title | content | commands | url
---|---|---|---|
Chapter 1. Metadata APIs
|
Chapter 1. Metadata APIs 1.1. APIRequestCount [apiserver.openshift.io/v1] Description APIRequestCount tracks requests made to an API. The instance name must be of the form resource.version.group , matching the resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object 1.3. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 1.4. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 1.5. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object 1.6. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.7. Event [v1] Description Event is a report of an event somewhere in the cluster. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.8. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 1.9. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object
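As a hedged illustration of how these metadata resources are typically inspected from the command line, the following oc commands are a minimal sketch; the resource name deployments.v1.apps, the ConfigMap name my-config, and the namespace my-namespace are placeholders, not values taken from this reference: 
$ oc get apirequestcounts 
$ oc get apirequestcount deployments.v1.apps -o yaml 
$ oc get configmap my-config -n my-namespace -o yaml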
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/metadata_apis/metadata-apis
|
Release notes for Red Hat build of OpenJDK 21.0.3
|
Release notes for Red Hat build of OpenJDK 21.0.3 Red Hat build of OpenJDK 21 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.3/index
|
Builds using Shipwright
|
Builds using Shipwright OpenShift Dedicated 4 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_shipwright/index
|
Preface
|
Preface As a cloud administrator, you can manage projects, users, and roles. Projects are organizational units in the cloud to which you can assign users. Projects (tenants) are also known as accounts. Users can be members of one or more projects. Roles define the actions that users can perform. Each OpenStack deployment must include at least one project, one user, and one role, linked together. As a cloud administrator, you can add, update, and delete projects and users, assign users to one or more projects, and change or remove these assignments. You can manage projects and users independently from each other. You can also configure user authentication with the Keystone identity service to control access to services and endpoints. Keystone provides token-based authentication and can integrate with LDAP and Active Directory, so you can manage users and identities externally and synchronize the user data with Keystone. Note Keystone v2 was deprecated in Red Hat OpenStack Platform 11 (Ocata). It was removed in Red Hat OpenStack Platform 13 (Queens), leaving only Keystone v3 available.
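A minimal command-line sketch of this workflow with the OpenStack client follows; it assumes admin credentials have been sourced, and the project, user, and role names (demo-project, demo-user, member) are placeholders rather than values from this guide: 
$ openstack project create --description "Demo project" demo-project 
$ openstack user create --project demo-project --password-prompt demo-user 
$ openstack role add --user demo-user --project demo-project member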
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/users_and_identity_management_guide/pr01
|
Chapter 1. Overview
|
Chapter 1. Overview AMQ .NET is a lightweight AMQP 1.0 library for the .NET platform. It enables you to write .NET applications that send and receive AMQP messages. AMQ .NET is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.9 Release Notes . AMQ .NET is based on AMQP.Net Lite . For detailed API documentation, see the AMQ .NET API reference . 1.1. Key features SSL/TLS for secure communication Flexible SASL authentication Seamless conversion between AMQP and native data types Access to all the features and capabilities of AMQP 1.0 An integrated development environment with full IntelliSense API documentation 1.2. Supported standards and protocols AMQ .NET supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms ANONYMOUS, PLAIN, and EXTERNAL Modern TCP with IPv6 1.3. Supported configurations AMQ .NET supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with .NET Core 3.1 Microsoft Windows 10 Pro with .NET Core 3.1 or .NET Framework 4.7 Microsoft Windows Server 2012 R2 and 2016 with .NET Core 3.1 or .NET Framework 4.7 AMQ .NET is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Connection A channel for communication between two peers on a network Session A context for sending and receiving messages Sender link A channel for sending messages to a target Receiver link A channel for receiving messages from a source Source A named point of origin for messages Target A named destination for messages Message A mutable holder of application data AMQ .NET sends and receives messages . Messages are transferred between connected peers over links . Links are established over sessions . Sessions are established over connections . A sending peer creates a sender link to send messages. The sender link has a target that identifies a queue or topic at the remote peer. A receiving client creates a receiver link to receive messages. The receiver link has a source that identifies a queue or topic at the remote peer. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir>
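For instance, a project directory could be created and run with the dotnet CLI as sketched below; the package name AMQPNetLite and the directory name hello-amqp are assumptions for illustration and do not come from this guide: 
$ dotnet new console -o hello-amqp 
$ cd hello-amqp 
$ dotnet add package AMQPNetLite 
$ dotnet run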
|
[
"cd <project-dir>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_.net_client/overview
|
Part II. API versioning
|
Part II. API versioning
| null |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/administering_the_api_gateway/api_versioning
|
probe::scsi.ioexecute
|
probe::scsi.ioexecute Name probe::scsi.ioexecute - Create mid-layer SCSI request and wait for the result Synopsis Values retries Number of times to retry request device_state_str The current state of the device, as a string dev_id The scsi device id channel The channel number data_direction The data_direction specifies whether this command is from/to the device. lun The lun number timeout Request timeout in seconds request_bufflen The data buffer buffer length host_no The host number data_direction_str Data direction, as a string device_state The current state of the device request_buffer The data buffer address
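As a hedged example of how these values might be consumed, the following one-line SystemTap script prints a line for each mid-layer SCSI request; it assumes the systemtap package and matching kernel debuginfo are installed: 
# stap -e 'probe scsi.ioexecute { printf("%d:%d:%d:%d dir=%s len=%d timeout=%d\n", host_no, channel, dev_id, lun, data_direction_str, request_bufflen, timeout) }'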
|
[
"scsi.ioexecute"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-scsi-ioexecute
|
Chapter 12. File System Check
|
Chapter 12. File System Check Filesystems may be checked for consistency, and optionally repaired, with filesystem-specific userspace tools. These tools are often referred to as fsck tools, where fsck is a shortened version of file system check . Note These filesystem checkers only guarantee metadata consistency across the filesystem; they have no awareness of the actual data contained within the filesystem and are not data recovery tools. Filesystem inconsistencies can occur for various reasons, including but not limited to hardware errors, storage administration errors, and software bugs. Before modern metadata-journaling filesystems became common, a filesystem check was required any time a system crashed or lost power. This was because a filesystem update could have been interrupted, leading to an inconsistent state. As a result, a filesystem check is traditionally run on each filesystem listed in /etc/fstab at boot-time. For journaling filesystems, this is usually a very short operation, because the filesystem's metadata journaling ensures consistency even after a crash. However, there are times when a filesystem inconsistency or corruption may occur, even for journaling filesystems. When this happens, the filesystem checker must be used to repair the filesystem. The following sections provide best practices and other useful information for performing this procedure. Important It is possible to disable the file system check at boot by setting the sixth field in /etc/fstab to 0; however, Red Hat does not recommend this unless the machine does not boot, the file system is extremely large, or the file system is on remote storage. 12.1. Best Practices for fsck Generally, running the filesystem check and repair tool can be expected to automatically repair at least some of the inconsistencies it finds. In some cases, severely damaged inodes or directories may be discarded if they cannot be repaired. Significant changes to the filesystem may occur. To ensure that unexpected or undesirable changes are not permanently made, perform the following precautionary steps: Dry run Most filesystem checkers have a mode of operation which checks but does not repair the filesystem. In this mode, the checker will print any errors that it finds and actions that it would have taken, without actually modifying the filesystem. Note Later phases of consistency checking may print extra errors as it discovers inconsistencies which would have been fixed in early phases if it were running in repair mode. Operate first on a filesystem image Most filesystems support the creation of a metadata image , a sparse copy of the filesystem which contains only metadata. Because filesystem checkers operate only on metadata, such an image can be used to perform a dry run of an actual filesystem repair, to evaluate what changes would actually be made. If the changes are acceptable, the repair can then be performed on the filesystem itself. Note Severely damaged filesystems may cause problems with metadata image creation. Save a filesystem image for support investigations A pre-repair filesystem metadata image can often be useful for support investigations if there is a possibility that the corruption was due to a software bug. Patterns of corruption present in the pre-repair image may aid in root-cause analysis. Operate only on unmounted filesystems A filesystem repair must be run only on unmounted filesystems. The tool must have sole access to the filesystem or further damage may result. 
Most filesystem tools enforce this requirement in repair mode, although some only support check-only mode on a mounted filesystem. If check-only mode is run on a mounted filesystem, it may find spurious errors that would not be found when run on an unmounted filesystem. Disk errors Filesystem check tools cannot repair hardware problems. A filesystem must be fully readable and writable if repair is to operate successfully. If a filesystem was corrupted due to a hardware error, the filesystem must first be moved to a good disk, for example with the dd(8) utility.
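On an ext4 filesystem, the precautions above might look like the following sketch; the device name /dev/vdb1 is a placeholder and the exact commands depend on the filesystem in use: 
# umount /dev/vdb1 
# e2fsck -n /dev/vdb1 
# e2image /dev/vdb1 /tmp/vdb1-pre-repair.e2i 
# e2fsck /dev/vdb1 
Here e2fsck -n is the check-only dry run, e2image saves a pre-repair metadata image that can accompany a support case, and the final e2fsck performs the repair on the unmounted filesystem. An XFS filesystem would use xfs_repair -n and xfs_metadump for the equivalent steps.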
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-fsck
|
2.5. Hosts
|
2.5. Hosts 2.5.1. Introduction to Hosts Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM). KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Virtualization Manager. A Red Hat Virtualization environment has one or more hosts attached to it. Red Hat Virtualization supports two methods of installing hosts. You can use the Red Hat Virtualization Host (RHVH) installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation. Note You can identify the host type of an individual host in the Red Hat Virtualization Manager by selecting the host's name. This opens the details view. Then look at the OS Description under Software . Hosts use tuned profiles, which provide virtualization optimizations. For more information on tuned , see the TuneD Profiles in Red Hat Enterprise Linux Monitoring and managing system status and performance . The Red Hat Virtualization Host has security features enabled. Security Enhanced Linux (SELinux) and the firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details view. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment. A host is a physical 64-bit server with the Intel VT or AMD-V extensions running Red Hat Enterprise Linux 7 AMD64/Intel 64 version. A physical host on the Red Hat Virtualization platform: Must belong to only one cluster in the system. Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions. Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation. Has a minimum of 2 GB RAM. Can have an assigned system administrator with system permissions. Administrators can receive the latest security advisories from the Red Hat Virtualization watch list. Subscribe to the Red Hat Virtualization watch list to receive new security advisories for Red Hat Virtualization products by email. Subscribe by completing this form: https://www.redhat.com/mailman/listinfo/rhsa-announce 2.5.2. Red Hat Virtualization Host Red Hat Virtualization Host (RHVH) is installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. It uses an Anaconda installation interface based on the one used by Red Hat Enterprise Linux hosts, and can be updated through the Red Hat Virtualization Manager or via yum . Using the yum command is the only way to install additional packages and have them persist after an upgrade. RHVH features a Cockpit web interface for monitoring the host's resources and performing administrative tasks. Direct access to RHVH via SSH or console is not supported, so the Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking or running terminal commands via the Terminal sub-tab. Access the Cockpit web interface at https:// HostFQDNorIP :9090 in your web browser. 
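As a hedged example, you can confirm from the host's console that the Cockpit socket is active, and enable it if it is not, before browsing to https:// HostFQDNorIP :9090 as described above: 
# systemctl status cockpit.socket 
# systemctl enable --now cockpit.socket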
Cockpit for RHVH includes a custom Virtualization dashboard that displays the host's health status, SSH Host Key, self-hosted engine status, virtual machines, and virtual machine statistics. Starting in Red Hat Virtualization version 4.4 SP1 the RHVH uses systemd-coredump to gather, save and process core dumps. For more information, see the documentation for core dump storage configuration files and systemd-coredump service . In Red Hat Virtualization 4.4 and earlier RHVH uses the Automatic Bug Reporting Tool (ABRT) to collect meaningful debug information about application crashes. For more information, see the Red Hat Enterprise Linux System Administrator's Guide . Note Custom boot kernel arguments can be added to Red Hat Virtualization Host using the grubby tool. The grubby tool makes persistent changes to the grub.cfg file. Navigate to the Terminal sub-tab in the host's Cockpit web interface to use grubby commands. See the Red Hat Enterprise Linux System Administrator's Guide for more information. Warning Do not create untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. 2.5.3. Red Hat Enterprise Linux hosts You can use a Red Hat Enterprise Linux 7 installation on capable hardware as a host. Red Hat Virtualization supports hosts running Red Hat Enterprise Linux 7 Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. To use your Red Hat Enterprise Linux machine as a host, you must also attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions. Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and the creation of a bridge. Use the details view to monitor the process as the host and management system establish a connection. Optionally, you can install a Cockpit web interface for monitoring the host's resources and performing administrative tasks. The Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking or running terminal commands via the Terminal sub-tab. Important Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM. 2.5.4. Satellite Host Provider Hosts Hosts provided by a Satellite host provider can also be used as virtualization hosts by the Red Hat Virtualization Manager. After a Satellite host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Virtualization in the same way as Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. 2.5.5. Host Tasks 2.5.5.1. Adding Standard Hosts to the Red Hat Virtualization Manager Important Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate) . Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge. Procedure From the Administration Portal, click Compute Hosts . Click New . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. 
Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, click the Advanced Parameters button to change the following advanced host settings: Disable automatic firewall configuration. Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click OK . The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the Events section of the Notification Drawer ( ). After a brief delay the host status changes to Up . 2.5.5.2. Adding a Satellite Host Provider Host The process for adding a Satellite host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Satellite host provider. Procedure Click Compute Hosts . Click New . Use the drop-down menu to select the Host Cluster for the new host. Select the Foreman/Satellite check box to display the options for adding a Satellite host provider host and select the provider from which the host is to be added. Select either Discovered Hosts or Provisioned Hosts . Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists. Provisioned Hosts : Select a host from the Providers Hosts drop-down list. Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired. Enter the Name and SSH Port (Provisioned Hosts only) of the new host. Select an authentication method to use with the host. Enter the root user's password to use password authentication. Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication (Provisioned Hosts only). You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the Advanced Parameters drop-down button to show the advanced host settings. Optionally disable automatic firewall configuration. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically. You can configure the Power Management , SPM , Console , and Network Provider using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure. Click OK to add the host and close the window. The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the details view. After installation is complete, the status will update to Reboot . The host must be activated for the status to change to Up . 2.5.5.3. Setting up Satellite errata viewing for a host In the Administration Portal, you can configure a host to view errata from Red Hat Satellite. After you associate a host with a Red Hat Satellite provider, you can receive updates in the host configuration dashboard about available errata and their importance, and decide when it is practical to apply the updates. 
Red Hat Virtualization 4.4 supports viewing errata with Red Hat Satellite 6.6. Prerequisites The Satellite server must be added as an external provider. The Manager and any hosts on which you want to view errata must be registered in the Satellite server by their respective FQDNs. This ensures that external content host IDs do not need to be maintained in Red Hat Virtualization. Important Hosts added using an IP address cannot report errata. The Satellite account that manages the host must have Administrator permissions and a default organization set. The host must be registered to the Satellite server. Use Red Hat Satellite remote execution to manage packages on hosts. Note The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely. Procedure Click Compute Hosts and select the host. Click Edit . Select the Use Foreman/Satellite check box. Select the required Satellite server from the drop-down list. Click OK . The host is now configured to show the available errata, and their importance, in the same dashboard used to manage the host's configuration. Additional resources Adding a Red Hat Satellite Instance for Host Provisioning Host Management Without Goferd and Katello Agent in the Red Hat Satellite document Managing Hosts 2.5.5.3.1. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. 
# vi /etc/default/grub ... GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on ... Note If intel_iommu=on or an AMD IOMMU is detected, you can try adding iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the option if the pt option doesn't work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: # grub2-mkconfig -o /boot/grub2/grub.cfg # reboot 2.5.5.3.2. Enabling nested virtualization for all virtual machines Important Using hooks to enable nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope . Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and nested virtual machines . Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to Red Hat Virtualization (RHV) administrators. By default, nested virtualization is not enabled in RHV. To enable nested virtualization, you install a VDSM hook, vdsm-hook-nestedvt , on all of the hosts in the cluster. Then, all of the virtual machines that run on these hosts can function as parent virtual machines. You should only run parent virtual machines on hosts that support nested virtualization. If a parent virtual machine migrates to a host that does not support nested virtualization, its child virtual machines fail. To prevent this from happening, configure all of the hosts in the cluster to support nested virtualization. Otherwise, restrict parent virtual machines from migrating to hosts that do not support nested virtualization. Warning Take precautions to prevent parent virtual machines from migrating to hosts that do not support nested virtualization. Procedure In the Administration Portal, click Compute Hosts . Select a host in the cluster where you want to enable nested virtualization and click Management Maintenance and OK . Select the host again, click Host Console , and log into the host console. Install the VDSM hook: Reboot the host. Log into the host console again and verify that nested virtualization is enabled: If this command returns Y or 1 , the feature is enabled. Repeat this procedure for all of the hosts in the cluster. Additional resources VDSM hooks 2.5.5.3.3. Enabling nested virtualization for individual virtual machines Important Nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope . Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and nested virtual machines . Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to Red Hat Virtualization (RHV) administrators. To enable nested virtualization on specific virtual machines , not all virtual machines, you configure a host or hosts to support nested virtualization. Then you configure the virtual machine or virtual machines to run on those specific hosts and enable Pass-Through Host CPU . This option lets the virtual machines use the nested virtualization settings you just configured on the host. This option also restricts which hosts the virtual machines can run on and requires manual migration. Otherwise, to enable nested virtualization for all of the virtual machines in a cluster, see Enabling nested virtualization for all virtual machines . Only run parent virtual machines on hosts that support nested virtualization. If you migrate a parent virtual machine to a host that does not support nested virtualization, its child virtual machines will fail. Warning Do not migrate parent virtual machines to hosts that do not support nested virtualization. Avoid live migration of parent virtual machines that are running child virtual machines. Even if the source and destination hosts are identical and support nested virtualization, the live migration can cause the child virtual machines to fail. Instead, shut down virtual machines before migration. Procedure Configure the hosts to support nested virtualization: In the Administration Portal, click Compute Hosts . Select a host in the cluster where you want to enable nested virtualization and click Management Maintenance and OK . Select the host again, click Host Console , and log into the host console. In the Edit Host window, select the Kernel tab. Under Kernel boot parameters , if the checkboxes are greyed-out, click RESET . Select Nested Virtualization and click OK . This action displays a kvm-<architecture>.nested=1 parameter in Kernel command line . The following steps add this parameter to the Current kernel CMD line . Click Installation Reinstall . When the host status returns to Up , click Management Restart under Power Management or SSH Management . Verify that nested virtualization is enabled. Log into the host console and enter: If this command returns Y or 1 , the feature is enabled. Repeat this procedure for all of the hosts you need to run parent virtual machines. Enable nested virtualization in specific virtual machines: In the Administration Portal, click Compute Virtual Machines . Select a virtual machine and click Edit . In the Edit Virtual Machine window, click Show Advanced Options and select the Host tab. Under Start Running On , click Specific Host and select the host or hosts you configured to support nested virtualization. Under CPU Options , select Pass-Through Host CPU . This action automatically sets the Migration mode to Allow manual migration only . Note In RHV version 4.2, you can only enable Pass-Through Host CPU when Do not allow migration is selected. Additional resources VDSM hooks Creating nested virtual machines in the RHEL documentation. 2.5.5.4. 
Moving a Host to Maintenance Mode Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. Hosts should be placed into maintenance mode before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage. When a host is placed into maintenance mode the Red Hat Virtualization Manager attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines. Note Virtual machines that are pinned to the host and cannot be migrated are shut down. You can check which virtual machines are pinned to the host by clicking Pinned to Host in the Virtual Machines tab of the host's details view. Placing a Host into Maintenance Mode Click Compute Hosts and select the desired host. Click Management Maintenance . This opens the Maintenance Host(s) confirmation window. Optionally, enter a Reason for moving the host into maintenance mode, which will appear in the logs and when the host is activated again. Then, click OK Note The host maintenance Reason field will only appear if it has been enabled in the cluster settings. See Cluster General Settings Explained for more information. Optionally, select the required options for hosts that support Gluster. Select the Ignore Gluster Quorum and Self-Heal Validations option to avoid the default checks. By default, the Manager checks that the Gluster quorum is not lost when the host is moved to maintenance mode. The Manager also checks that there is no self-heal activity that will be affected by moving the host to maintenance mode. If the Gluster quorum will be lost or if there is self-heal activity that will be affected, the Manager prevents the host from being placed into maintenance mode. Only use this option if there is no other way to place the host in maintenance mode. Select the Stop Gluster Service option to stop all Gluster services while moving the host to maintenance mode. Note These fields will only appear in the host maintenance window when the selected host supports Gluster. See Replacing the Primary Gluster Storage Node in Maintaining Red Hat Hyperconverged Infrastructure for more information. Click OK to initiate maintenance mode. All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to another host. The Status field of the host changes to Preparing for Maintenance , and finally Maintenance when the operation completes successfully. VDSM does not stop while the host is in maintenance mode. Note If migration fails on any virtual machine, click Management Activate on the host to stop the operation placing it into maintenance mode, then click Cancel Migration on the virtual machine to stop the migration. 2.5.5.5. Activating a Host from Maintenance Mode A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used. Activation may fail if the host is not ready; ensure that all tasks are complete before attempting to activate the host. Procedure Click Compute Hosts and select the host. Click Management Activate . The host status changes to Unassigned , and finally Up when the operation is complete. Virtual machines can now run on the host. 
Virtual machines that were migrated off the host when it was placed into maintenance mode are not automatically migrated back to the host when it is activated, but can be migrated manually. If the host was the Storage Pool Manager (SPM) before being placed into maintenance mode, the SPM role does not return automatically when the host is activated. 2.5.5.5.1. Configuring Host Firewall Rules You can configure the host firewall rules so that they are persistent, using Ansible. The cluster must be configured to use firewalld . Note Changing the firewalld zone is not supported. Configuring Firewall Rules for Hosts On the Manager machine, edit ovirt-host-deploy-post-tasks.yml.example to add a custom firewall port: 
# vi /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml.example 
--- 
# 
# Any additional tasks required to be executing during host deploy process can 
# be added below 
# 
- name: Enable additional port on firewalld 
  firewalld: 
    port: "12345/tcp" 
    permanent: yes 
    immediate: yes 
    state: enabled 
Save the file to another location as ovirt-host-deploy-post-tasks.yml . New or reinstalled hosts are configured with the updated firewall rules. Existing hosts must be reinstalled by clicking Installation Reinstall and selecting Automatically configure host firewall . 2.5.5.5.2. Removing a Host Removing a host from your Red Hat Virtualization environment is sometimes necessary, such as when you need to reinstall a host. Procedure Click Compute Hosts and select the host. Click Management Maintenance . Once the host is in maintenance mode, click Remove . The Remove Host(s) confirmation window opens. Select the Force Remove check box if the host is part of a Red Hat Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive. Click OK . 2.5.5.5.3. Updating Hosts Between Minor Releases You can update all hosts in a cluster , or update individual hosts . 2.5.5.5.3.1. Updating All Hosts in a Cluster You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates. Update one cluster at a time. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead. Procedure In the Administration Portal, click Compute Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster. Click Upgrade . Select the hosts to update, then click . Configure the options: Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. 
You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update. Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60 . You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly. Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default. Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot. Use Maintenance Policy sets the cluster's scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option. Click . Review the summary of the hosts and virtual machines that are affected. Click Upgrade . A cluster upgrade status screen displays with a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process. You can track the progress of host updates: in the Compute Clusters view, the Upgrade Status column shows a progress bar that displays the percentage of completion. in the Compute Hosts view in the Events section of the Notification Drawer ( ). You can track the progress of individual virtual machine migrations in the Status column of the Compute Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines. 2.5.5.5.3.2. Updating Individual Hosts Use the host upgrade manager to update individual hosts directly from the Administration Portal. Note The upgrade manager only checks hosts with a status of Up or Non-operational , but not Maintenance . Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. 
Pinned virtual machines must be shut down before updating the host. Procedure Ensure that the correct repositories are enabled. To view a list of currently enabled repositories, run dnf repolist . For Red Hat Virtualization Hosts: # subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms For Red Hat Enterprise Linux hosts: # subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \ --enable=advanced-virt-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms # subscription-manager release --set=8.6 In the Administration Portal, click Compute Hosts and select the host to be updated. Click Installation Check for Upgrade and click OK . Open the Notification Drawer ( ) and expand the Events section to see the result. If an update is available, click Installation Upgrade . Click OK to update the host. Running virtual machines are migrated according to their migration policy. If migration is disabled for any virtual machines, you are prompted to shut them down. The details of the host are updated in Compute Hosts and the status transitions through these stages: Maintenance > Installing > Reboot > Up Note If the update fails, the host's status changes to Install Failed . From Install Failed you can click Installation Upgrade again. Repeat this procedure for each host in the Red Hat Virtualization environment. Note You should update the hosts from the Administration Portal. However, you can update the hosts using dnf upgrade instead. 2.5.5.5.3.3. Manually Updating Hosts Caution This information is provided for advanced system administrators who need to update hosts manually, but Red Hat does not support this method. The procedure described in this topic does not include important steps, including certificate renewal, assuming advanced knowledge of such information. Red Hat supports updating hosts using the Administration Portal. For details, see Updating individual hosts or Updating all hosts in a cluster in the Administration Guide . You can use the dnf command to update your hosts. Update your systems regularly, to ensure timely application of security and bug fixes. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host. Procedure Ensure the correct repositories are enabled. You can check which repositories are currently enabled by running dnf repolist . 
For Red Hat Virtualization Hosts: # subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms For Red Hat Enterprise Linux hosts: # subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \ --enable=advanced-virt-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms # subscription-manager release --set=8.6 In the Administration Portal, click Compute Hosts and select the host to be updated. Click Management Maintenance and OK . For Red Hat Enterprise Linux hosts: Identify the current version of Red Hat Enterprise Linux: # cat /etc/redhat-release Check which version of the redhat-release package is available: # dnf --refresh info --available redhat-release This command shows any available updates. For example, when upgrading from Red Hat Enterprise Linux 8.2. z to 8.3, compare the version of the package with the currently installed version: Available Packages Name : redhat-release Version : 8.3 Release : 1.0.el8 ... Caution The Red Hat Enterprise Linux Advanced Virtualization module is usually released later than the Red Hat Enterprise Linux y-stream. If no new Advanced Virtualization module is available yet, or if there is an error enabling it, stop here and cancel the upgrade. Otherwise you risk corrupting the host. If the Advanced Virtualization stream is available for Red Hat Enterprise Linux 8.3 or later, reset the virt module: # dnf module reset virt Note If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact. You can see the value of the stream by entering: Enable the virt module in the Advanced Virtualization stream with the following command: For RHV 4.4.2: # dnf module enable virt:8.2 For RHV 4.4.3 to 4.4.5: # dnf module enable virt:8.3 For RHV 4.4.6 to 4.4.10: # dnf module enable virt:av For RHV 4.4 and later: Note Starting with RHEL 8.6 the Advanced virtualization packages will use the standard virt:rhel module. For RHEL 8.4 and 8.5, only one Advanced Virtualization stream is used, rhel:av . Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Update the host: # dnf upgrade --nobest Reboot the host to ensure all updates are correctly applied. Note Check the imgbased logs to see if any additional package updates have failed for a Red Hat Virtualization Host. If some packages were not successfully reinstalled after the update, check that the packages are listed in /var/imgbased/persisted-rpms . Add any missing packages then run rpm -Uvh /var/imgbased/persisted-rpms/* . Repeat this process for each host in the Red Hat Virtualization environment. 2.5.5.5.4. Reinstalling Hosts Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Prerequisites If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low. Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. 
To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance. Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks. Procedure Click Compute Hosts and select the host. Click Management Maintenance and OK . Click Installation Reinstall . This opens the Install Host window. Click OK to reinstall the host. After a host has been reinstalled and its status returns to Up , you can migrate virtual machines back to the host. Important After you register a Red Hat Virtualization Host to the Red Hat Virtualization Manager and reinstall it, the Administration Portal may erroneously display its status as Install Failed . Click Management Activate , and the host will change to an Up status and be ready for use. 2.5.5.6. Viewing Host Errata Errata for each host can be viewed after the host has been configured to receive errata information from the Red Hat Satellite server. For more information on configuring a host to receive errata information see Configuring Satellite Errata Management for a Host Procedure Click Compute Hosts . Click the host's name. This opens the details view. Click the Errata tab. 2.5.5.7. Viewing the Health Status of a Host Hosts have an external health status in addition to their regular Status . The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the host's Name as one of the following icons: OK : No icon Info : Warning : Error : Failure : To view further details about the host's health status, click the host's name. This opens the details view, and click the Events tab. The host's health status can also be viewed using the REST API. A GET request on a host will include the external_status element, which contains the health status. You can set a host's health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide . 2.5.5.8. Viewing Host Devices You can view the host devices for each host in the Host Devices tab in the details view. If the host has been configured for direct device assignment, these devices can be directly attached to virtual machines for improved performance. For more information on the hardware requirements for direct device assignment, see Additional Hardware Considerations for Using Device Assignment in Hardware Considerations for Implementing SR-IOV . For more information on configuring the host for direct device assignment, see Configuring a Host for PCI Passthrough host tasks . For more information on attaching host devices to virtual machines, see Host Devices in the Virtual Machine Management Guide . Procedure Click Compute Hosts . Click the host's name. This opens the details view. Click Host Devices tab. This tab lists the details of the host devices, including whether the device is attached to a virtual machine, and currently in use by that virtual machine. 2.5.5.9. Accessing Cockpit from the Administration Portal Cockpit is available by default on Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. You can access the Cockpit web interface by typing the address into a browser, or through the Administration Portal. Procedure In the Administration Portal, click Compute Hosts and select a host. Click Host Console . The Cockpit login page opens in a new browser window. 2.5.5.9.1. 
Setting a Legacy SPICE Cipher SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is: kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine. You can change the cipher string by using an Ansible playbook. Changing the cipher string On the Manager machine, create a file in the directory /usr/share/ovirt-engine/playbooks . For example: 
# vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml 
Enter the following in the file and save it: 
- name: oVirt - setup weaker SPICE encryption for old clients 
  hosts: hostname 
  vars: 
    host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' 
  roles: 
    - ovirt-host-deploy-spice-encryption 
Run the file you just created: 
# ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml 
Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy using the --extra-vars option with the variable host_deploy_spice_cipher_string : 
# ansible-playbook -l hostname \ 
  --extra-vars host_deploy_spice_cipher_string="DEFAULT:-RC4:-3DES:-DES" \ 
  /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml 
2.5.5.10. Configuring Host Power Management Settings Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal. You must configure host power management in order to utilize host high availability and virtual machine high availability. For more information about power management devices, see Power Management in the Technical Reference . Procedure Click Compute Hosts and select a host. Click Management Maintenance , and click OK to confirm. When the host is in maintenance mode, click Edit . Click the Power Management tab. Select the Enable Power Management check box to enable the fields. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump. Important If you enable or disable Kdump integration on an existing host, you must reinstall the host for kdump to be configured. Optionally, select the Disable policy control of power management check box if you do not want your host's power management to be controlled by the Scheduling Policy of the host's cluster . Click the plus ( + ) button to add a new power management device. The Edit fence agent window opens. Enter the User Name and Password of the power management device into the appropriate fields. Select the power management device Type in the drop-down list. Enter the IP address in the Address field. Enter the SSH Port number used by the power management device to communicate with the host. Enter the Slot number used to identify the blade of the power management device. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries. If both IPv4 and IPv6 IP addresses can be used (default), leave the Options field blank. If only IPv4 IP addresses can be used, enter inet4_only=1 . If only IPv6 IP addresses can be used, enter inet6_only=1 . Select the Secure check box to enable the power management device to connect securely to the host. 
Click Test to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification. Click OK to close the Edit fence agent window. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host's cluster and dc (datacenter) for a fencing proxy. Click OK . Note For IPv6, Red Hat Virtualization supports only static addressing. Dual-stack IPv4 and IPv6 addressing is not supported. The Management Power Management drop-down menu is now enabled in the Administration Portal. 2.5.5.11. Configuring Host Storage Pool Manager Settings The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources. The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Procedure Click Compute Hosts . Click Edit . Click the SPM tab. Use the radio buttons to select the appropriate SPM priority for the host. Click OK . 2.5.5.11.1. Migrating a self-hosted engine host to a different cluster You cannot migrate a host that is configured as a self-hosted engine host to a data center or cluster other than the one in which the self-hosted engine virtual machine is running. All self-hosted engine hosts must be in the same data center and cluster. You need to disable the host from being a self-hosted engine host by undeploying the self-hosted engine configuration from the host. Procedure Click Compute Hosts and select the host. Click Management Maintenance . The host's status changes to Maintenance . Under Reinstall , select Hosted Engine UNDEPLOY . Click Reinstall . Tip Alternatively, you can use the REST API undeploy_hosted_engine parameter. Click Edit . Select the target data center and cluster. Click OK . Click Management Activate . Additional resources Moving a Host to Maintenance mode Activating a Host from Maintenance Mode 2.5.6. Explanation of Settings and Controls in the New Host and Edit Host Windows 2.5.6.1. Host General Settings Explained These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Satellite host provider hosts. The General settings table contains the information required on the General tab of the New Host or Edit Host window. Table 2.20. General settings Field Name Description Host Cluster The cluster and data center to which the host belongs. Use Foreman/Satellite Select or clear this check box to view or hide options for adding hosts provided by Satellite host providers. The following options are also available: Discovered Hosts Discovered Hosts - A drop-down list that is populated with the name of Satellite hosts discovered by the engine. Host Groups -A drop-down list of host groups available. Compute Resources - A drop-down list of hypervisors to provide compute resources. Provisioned Hosts Providers Hosts - A drop-down list that is populated with the name of hosts provided by the selected external provider. 
The entries in this list are filtered in accordance with any search queries that have been input in the Provider search filter . Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts. Name The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Comment A field for adding plain text, human-readable comments regarding the host. Hostname The IP address or resolvable host name of the host. If a resolvable hostname is used, you must ensure that all addresses that the hostname is resolved to match the IP addresses, IPv4 and IPv6, used by the management network of the host. Password The password of the host's root user. Set the password when adding the host. The password cannot be edited afterwards. Activate host after install Select this checkbox to activate the host after successful installation. This is enabled by default and required for the hypervisors to be activated successfully. After successful installation, you can clear this checkbox to switch the host status to Maintenance. This allows the administrator to perform additional configuration tasks on the hypervisors. Reboot host after install Select this checkbox to reboot the host after it is installed. This is enabled by default. Note Changing the kernel command line parameters of the host, or changing the firewall type of the cluster also require you to reboot the host. SSH Public Key Copy the contents in the text box to the /root/.ssh/authorized_hosts file on the host to use the Manager's SSH key instead of a password to authenticate with a host. Automatically configure host firewall When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter . SSH Fingerprint You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter . 2.5.6.2. Host Power Management Settings Explained The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows. You can configure power management if the host has a supported power management card. Table 2.21. Power Management Settings Field Name Description Enable Power Management Enables power management on the host. Select this check box to enable the rest of the fields in the Power Management tab. Kdump integration Prevents the host from fencing while performing a kernel crash dump, so that the crash dump is not interrupted. In Red Hat Enterprise Linux 7.1 and later, kdump is available by default. If kdump is available on the host, but its configuration is not valid (the kdump service cannot be started), enabling Kdump integration will cause the host (re)installation to fail. If you enable or disable Kdump integration on an existing host, you must reinstall the host . Disable policy control of power management Power management is controlled by the Scheduling Policy of the host's cluster . 
If power management is enabled and the defined low utilization value is reached, the Manager will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Select this check box to disable policy control. Agents by Sequential Order Lists the host's fence agents. Fence agents can be sequential, concurrent, or a mix of both. If fence agents are used sequentially, the primary agent is used first to stop or start a host, and if it fails, the secondary agent is used. If fence agents are used concurrently, both fence agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up. Fence agents are sequential by default. Use the up and down buttons to change the sequence in which the fence agents are used. To make two fence agents concurrent, select one fence agent from the Concurrent with drop-down list to the other fence agent. Additional fence agents can be added to the group of concurrent fence agents by selecting the group from the Concurrent with drop-down list to the additional fence agent. Add Fence Agent Click the + button to add a new fence agent. The Edit fence agent window opens. See the table below for more information on the fields in this window. Power Management Proxy Preference By default, specifies that the Manager will search for a fencing proxy within the same cluster as the host, and if no fencing proxy is found, the Manager will search in the same dc (data center). Use the up and down buttons to change the sequence in which these resources are used. This field is available under Advanced Parameters . The following table contains the information required in the Edit fence agent window. Table 2.22. Edit fence agent Settings Field Name Description Address The address to access your host's power management device. Either a resolvable hostname or an IP address. User Name User account with which to access the power management device. You can set up a user on the device, or use the default user. Password Password for the user accessing the power management device. Type The type of power management device in your host. Choose one of the following: apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices. apc_snmp - Use with APC 5.x power switch devices. bladecenter - IBM Bladecenter Remote Supervisor Adapter. cisco_ucs - Cisco Unified Computing System. drac5 - Dell Remote Access Controller for Dell computers. drac7 - Dell Remote Access Controller for Dell computers. eps - ePowerSwitch 8M+ network power switch. hpblade - HP BladeSystem. ilo , ilo2 , ilo3 , ilo4 - HP Integrated Lights-Out. ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices. rsa - IBM Remote Supervisor Adapter. rsb - Fujitsu-Siemens RSB management interface. wti - WTI Network Power Switch. For more information about power management devices, see Power Management in the Technical Reference . Port The port number used by the power management device to communicate with the host. Slot The number used to identify the blade of the power management device. Service Profile The service profile name used to identify the blade of the power management device. This field appears instead of Slot when the device type is cisco_ucs . Options Power management device specific options. Enter these as 'key=value'. See the documentation of your host's power management device for the options available. 
For Red Hat Enterprise Linux 7 hosts, if you are using cisco_ucs as the power management device, you also need to append ssl_insecure=1 to the Options field. Secure Select this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols depending on the power management agent. 2.5.6.3. SPM Priority Settings Explained The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window. Table 2.23. SPM settings Field Name Description SPM Priority Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low , Normal , and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal. 2.5.6.4. Host Console Settings Explained The Console settings table details the information required on the Console tab of the New Host or Edit Host window. Table 2.24. Console settings Field Name Description Override display address Select this check box to override the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, the machine returns a public IP or FQDN (which is resolved in the external network to the public IP). Display address The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP. vGPU Placement Specifies the preferred placement of vGPUs: Consolidated : Select this option if you prefer to run more vGPUs on available physical cards. Separated : Select this option if you prefer to run each vGPU on a separate physical card. 2.5.6.5. Network Provider Settings Explained The Network Provider settings table details the information required on the Network Provider tab of the New Host or Edit Host window. Table 2.25. Network Provider settings Field Name Description External Network Provider If you have added an external network provider and want the host's network to be provisioned by the external network provider, select one from the list. 2.5.6.6. Kernel Settings Explained The Kernel settings table details the information required on the Kernel tab of the New Host or Edit Host window. Common kernel boot parameter options are listed as check boxes so you can easily select them. For more complex changes, use the free text entry field to Kernel command line to add in any additional parameters required. If you change any kernel command line parameters, you must reinstall the host . Important If the host is attached to the Manager, you must place the host into maintenance mode before making changes. After making the changes, reinstall the host to apply the changes. Table 2.26. Kernel Settings Field Name Description Hostdev Passthrough & SR-IOV Enables the IOMMU flag in the kernel so a virtual machine can use a host device as if it is attached directly to the virtual machine. The host hardware and firmware must also support IOMMU. The virtualization extension and IOMMU extension must be enabled on the hardware. See Configuring a Host for PCI Passthrough . IBM POWER8 has IOMMU enabled by default. 
Nested Virtualization Enables the vmx or svm flag so virtual machines can run within virtual machines. This option is a Technology Preview feature: It is intended only for evaluation purposes. It is not supported for production purposes. To use this setting, you must install the vdsm-hook-nestedvt hook on the host. For details, see Enabling nested virtualization for all virtual machines and Enabling nested virtualization for individual virtual machines Unsafe Interrupts If IOMMU is enabled but the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling this option. Note that you should only enable this option if the virtual machines on the host are trusted; having the option enabled potentially exposes the host to MSI attacks from the virtual machines. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes. PCI Reallocation If your SR-IOV NIC is unable to allocate virtual functions because of memory issues, consider enabling this option. The host hardware and firmware must also support PCI reallocation. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes. Blacklist Nouveau Blocks the nouveau driver. Nouveau is a community driver for NVIDIA GPUs that conflicts with vendor-supplied drivers. The nouveau driver should be blocked when vendor drivers take precedence. SMT Disabled Disables Simultaneous Multi Threading (SMT). Disabling SMT can mitigate security vulnerabilities, such as L1TF or MDS. FIPS mode Enables FIPS mode. For details, see Enabling FIPS using the Manager . Kernel command line This field allows you to append more kernel parameters to the default parameters. Note If the kernel boot parameters are grayed out, click the reset button and the options will be available. 2.5.6.7. Hosted Engine Settings Explained The Hosted Engine settings table details the information required on the Hosted Engine tab of the New Host or Edit Host window. Table 2.27. Hosted Engine Settings Field Name Description Choose hosted engine deployment action Three options are available: None - No actions required. Deploy - Select this option to deploy the host as a self-hosted engine node. Undeploy - For a self-hosted engine node, you can select this option to undeploy the host and remove self-hosted engine related configurations. 2.5.7. Host Resilience 2.5.7.1. Host High Availability The Red Hat Virtualization Manager uses fencing to keep hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host. Non Operational hosts can be communicated with by the Manager, but have an incorrect configuration, for example a missing logical network. Non Responsive hosts cannot be communicated with by the Manager. Fencing allows a cluster to react to unexpected host failures and enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host's power management device and test their correctness from time to time. In a fencing operation, a non-responsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains non-responsive pending manual intervention and troubleshooting. Note To automatically check the fencing parameters, you can configure the PMHealthCheckEnabled (false by default) and PMHealthCheckIntervalInSec (3600 sec by default) engine-config options. 
When set to true, PMHealthCheckEnabled will check all host agents at the interval specified by PMHealthCheckIntervalInSec , and raise warnings if it detects issues. See Syntax for the engine-config Command for more information about configuring engine-config options. Power management operations can be performed by Red Hat Virtualization Manager after it reboots, by a proxy host, or manually in the Administration Portal. All the virtual machines running on the non-responsive host are stopped, and highly available virtual machines are started on a different host. At least two hosts are required for power management operations. After the Manager starts up, it automatically attempts to fence non-responsive hosts that have power management enabled after the quiet time (5 minutes by default) has elapsed. The quiet time can be configured by updating the DisableFenceAtStartupInSec engine-config option. Note The DisableFenceAtStartupInSec engine-config option helps prevent a scenario where the Manager attempts to fence hosts while they boot up. This can occur after a data center outage because a host's boot process is normally longer than the Manager boot process. Hosts can be fenced automatically by the proxy host using the power management parameters, or manually by right-clicking on a host and using the options on the menu. Important If a host runs virtual machines that are highly available, power management must be enabled and configured. 2.5.7.2. Power Management by Proxy in Red Hat Virtualization The Red Hat Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy. You can select between: Any host in the same cluster as the host requiring fencing. Any host in the same data center as the host requiring fencing. A viable fencing proxy host has a status of either UP or Maintenance . 2.5.7.3. Setting Fencing Parameters on a Host The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC). All power management operations are done using a proxy host, as opposed to directly by the Red Hat Virtualization Manager. At least two hosts are required for power management operations. Procedure Click Compute Hosts and select the host. Click Edit . Click the Power Management tab. Select the Enable Power Management check box to enable the fields. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump. Important If you enable or disable Kdump integration on an existing host, you must reinstall the host . Optionally, select the Disable policy control of power management check box if you do not want your host's power management to be controlled by the Scheduling Policy of the host's cluster. Click the + button to add a new power management device. The Edit fence agent window opens. Enter the Address , User Name , and Password of the power management device. Select the power management device Type from the drop-down list. Enter the SSH Port number used by the power management device to communicate with the host. Enter the Slot number used to identify the blade of the power management device. Enter the Options for the power management device. 
Use a comma-separated list of 'key=value' entries. Select the Secure check box to enable the power management device to connect securely to the host. Click the Test button to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification. Warning Power management parameters (userid, password, options, etc) are tested by Red Hat Virtualization Manager only during setup and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without the corresponding change in Red Hat Virtualization Manager, fencing is likely to fail when most needed. Click OK to close the Edit fence agent window. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host's cluster and dc (datacenter) for a fencing proxy. Click OK . You are returned to the list of hosts. Note that the exclamation mark to the host's name has now disappeared, signifying that power management has been successfully configured. 2.5.7.4. fence_kdump Advanced Configuration kdump Click the name of a host to view the status of the kdump service in the General tab of the details view: Enabled : kdump is configured properly and the kdump service is running. Disabled : the kdump service is not running (in this case kdump integration will not work properly). Unknown : happens only for hosts with an earlier VDSM version that does not report kdump status. For more information on installing and using kdump, see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide . fence_kdump Enabling Kdump integration in the Power Management tab of the New Host or Edit Host window configures a standard fence_kdump setup. If the environment's network configuration is simple and the Manager's FQDN is resolvable on all hosts, the default fence_kdump settings are sufficient for use. However, there are some cases where advanced configuration of fence_kdump is necessary. Environments with more complex networking may require manual changes to the configuration of the Manager, fence_kdump listener, or both. For example, if the Manager's FQDN is not resolvable on all hosts with Kdump integration enabled, you can set a proper host name or IP address using engine-config : engine-config -s FenceKdumpDestinationAddress= A.B.C.D The following example cases may also require configuration changes: The Manager has two NICs, where one of these is public-facing, and the second is the preferred destination for fence_kdump messages. You need to execute the fence_kdump listener on a different IP or port. You need to set a custom interval for fence_kdump notification messages, to prevent possible packet loss. Customized fence_kdump detection settings are recommended for advanced users only, as changes to the default configuration are only necessary in more complex networking setups. 2.5.7.5. fence_kdump listener Configuration Edit the configuration of the fence_kdump listener. This is only necessary in cases where the default configuration is not sufficient. Procedure Create a new file (for example, my-fence-kdump.conf ) in /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/ . Enter your customization with the syntax OPTION = value and save the file. Important The edited values must also be changed in engine-config as outlined in the fence_kdump Listener Configuration Options table in Configuring fence-kdump on the Manager . 
Restart the fence_kdump listener: # systemctl restart ovirt-fence-kdump-listener.service The following options can be customized if required: Table 2.28. fence_kdump Listener Configuration Options Variable Description Default Note LISTENER_ADDRESS Defines the IP address to receive fence_kdump messages on. 0.0.0.0 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationAddress in engine-config . LISTENER_PORT Defines the port to receive fence_kdump messages on. 7410 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationPort in engine-config . HEARTBEAT_INTERVAL Defines the interval in seconds of the listener's heartbeat updates. 30 If the value of this parameter is changed, it must be half the size or smaller than the value of FenceKdumpListenerTimeout in engine-config . SESSION_SYNC_INTERVAL Defines the interval in seconds to synchronize the listener's host kdumping sessions in memory to the database. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KdumpStartedTimeout in engine-config . REOPEN_DB_CONNECTION_INTERVAL Defines the interval in seconds to reopen the database connection which was previously unavailable. 30 - KDUMP_FINISHED_TIMEOUT Defines the maximum timeout in seconds after the last received message from kdumping hosts after which the host kdump flow is marked as FINISHED. 60 If the value of this parameter is changed, it must be double the size or higher than the value of FenceKdumpMessageInterval in engine-config . 2.5.7.6. Configuring fence_kdump on the Manager Edit the Manager's kdump configuration. This is only necessary in cases where the default configuration is not sufficient. The current configuration values can be found using: # engine-config -g OPTION Procedure Edit kdump's configuration using the engine-config command: # engine-config -s OPTION = value Important The edited values must also be changed in the fence_kdump listener configuration file as outlined in the Kdump Configuration Options table. See fence_kdump listener configuration . Restart the ovirt-engine service: # systemctl restart ovirt-engine.service Reinstall all hosts with Kdump integration enabled, if required (see the table below). The following options can be configured using engine-config : Table 2.29. Kdump Configuration Options Variable Description Default Note FenceKdumpDestinationAddress Defines the hostname(s) or IP address(es) to send fence_kdump messages to. If empty, the Manager's FQDN is used. Empty string (Manager FQDN is used) If the value of this parameter is changed, it must match the value of LISTENER_ADDRESS in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. FenceKdumpDestinationPort Defines the port to send fence_kdump messages to. 7410 If the value of this parameter is changed, it must match the value of LISTENER_PORT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. FenceKdumpMessageInterval Defines the interval in seconds between messages sent by fence_kdump. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KDUMP_FINISHED_TIMEOUT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. FenceKdumpListenerTimeout Defines the maximum timeout in seconds since the last heartbeat to consider the fence_kdump listener alive. 
90 If the value of this parameter is changed, it must be double the size or higher than the value of HEARTBEAT_INTERVAL in the fence_kdump listener configuration file. KdumpStartedTimeout Defines the maximum timeout in seconds to wait until the first message from the kdumping host is received (to detect that host kdump flow has started). 30 If the value of this parameter is changed, it must be double the size or higher than the value of SESSION_SYNC_INTERVAL in the fence_kdump listener configuration file, and FenceKdumpMessageInterval . 2.5.7.7. Soft-Fencing Hosts Hosts can sometimes become non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue. "SSH Soft Fencing" is a process where the Manager attempts to restart VDSM via SSH on non-responsive hosts. If the Manager fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent if an external fencing agent has been configured. Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens: On the first network failure, the status of the host changes to "connecting". The Manager then makes three attempts to ask VDSM for its status, or it waits for an interval determined by the load on the host. The formula for determining the length of the interval is configured by the configuration values TimeoutToResetVdsInSeconds (the default is 60 seconds) + [DelayResetPerVmInSeconds (the default is 0.5 seconds)]*(the count of running virtual machines on host) + [DelayResetForSpmInSeconds (the default is 20 seconds)] * 1 (if host runs as SPM) or 0 (if the host does not run as SPM). To give VDSM the maximum amount of time to respond, the Manager chooses the longer of the two options mentioned above (three attempts to retrieve the status of VDSM or the interval determined by the above formula). If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent. Note Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured. 2.5.7.8. Using Host Power Management Functions When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host. Procedure Click Compute Hosts and select the host. Click the Management drop-down menu and select one of the following Power Management options: Restart : This option stops the host and waits until the host's status changes to Down . When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use its status displays as Up . 
Start : This option starts the host and lets it join a cluster. When it is ready for use its status displays as Up . Stop : This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped its status displays as Non-Operational . Note If Power Management is not enabled, you can restart or stop the host by selecting it, clicking the Management drop-down menu, and selecting an SSH Management option, Restart or Stop . Important When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents have to respond to the Stop command for the host to be stopped; and when one agent responds to the Start command, the host will go up. For sequential agents, to start or stop a host, the primary agent is used first; if it fails, the secondary agent is used. Click OK . Additional resources Configuring ACPI for use with integrated fence devices 2.5.7.9. Manually Fencing or Isolating a Non-Responsive Host If a host unpredictably goes into a non-responsive state, for example, due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or if it is incorrectly configured, you can reboot the host manually. Warning Do not select Confirm 'Host has been Rebooted' unless you have manually rebooted the host. Using this option while the host is still running can lead to a virtual machine image corruption. Procedure In the Administration Portal, click Compute Hosts and confirm the host's status is Non Responsive . Manually reboot the host. This could mean physically entering the lab and rebooting the host. In the Administration Portal, select the host and click More Actions ( ), then click Confirm 'Host has been Rebooted' . Select the Approve Operation check box and click OK . If your hosts take an unusually long time to boot, you can set ServerRebootTimeout to specify how many seconds to wait before determining that the host is Non Responsive : # engine-config --set ServerRebootTimeout= integer
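For example, a minimal sketch on the Manager machine for hosts that need roughly ten minutes to boot (the 600-second value is illustrative only, not a recommendation):
# engine-config -g ServerRebootTimeout
# engine-config -s ServerRebootTimeout=600
# systemctl restart ovirt-engine.service
The first command shows the current value, the second sets the new timeout in seconds, and restarting the ovirt-engine service applies the change.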
|
[
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on",
"vi /etc/default/grub ... GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on ...",
"vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"reboot",
"dnf install vdsm-hook-nestedvt",
"cat /sys/module/kvm*/parameters/nested",
"cat /sys/module/kvm*/parameters/nested",
"vi /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml.example --- # Any additional tasks required to be executing during host deploy process can be added below # - name: Enable additional port on firewalld firewalld: port: \" 12345/tcp \" permanent: yes immediate: yes state: enabled",
"subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms --enable=advanced-virt-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms subscription-manager release --set=8.6",
"subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms --enable=advanced-virt-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms subscription-manager release --set=8.6",
"cat /etc/redhat-release",
"dnf --refresh info --available redhat-release",
"Available Packages Name : redhat-release Version : 8.3 Release : 1.0.el8 ...",
"dnf module reset virt",
"dnf module list virt",
"dnf module enable virt:8.2",
"dnf module enable virt:8.3",
"dnf module enable virt:av",
"dnf module enable virt:rhel",
"dnf module -y enable nodejs:14",
"dnf upgrade --nobest",
"vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml",
"name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption",
"ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml",
"ansible-playbook -l hostname --extra-vars host_deploy_spice_cipher_string=\"DEFAULT:-RC4:-3DES:-DES\" /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml",
"engine-config -s FenceKdumpDestinationAddress= A.B.C.D",
"systemctl restart ovirt-fence-kdump-listener.service",
"engine-config -g OPTION",
"engine-config -s OPTION = value",
"systemctl restart ovirt-engine.service",
"engine-config --set ServerRebootTimeout= integer"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-hosts
|
Chapter 3. Restoring images for overcloud nodes
|
Chapter 3. Restoring images for overcloud nodes The director requires the latest disk images for provisioning new overcloud nodes. Follow this procedure to restore these images. Procedure Source the stackrc file to enable the director's command line tools: Install the rhosp-director-images and rhosp-director-images-ipa packages: Extract the image archives to the images directory in the stack user's home ( /home/stack/images ): Import these images into the director: Configure nodes in your environment to use the new images:
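After the nodes are configured, you can optionally confirm that the new images were imported into the director; a quick check (the exact image names can vary by release):
(undercloud) [stack@director images]$ openstack image list
The output should include the overcloud-full images that you uploaded in the previous steps.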
|
[
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images rhosp-director-images-ipa",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.0.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.0.0.tar; do tar -xvf USDi; done",
"(undercloud) [stack@director images]USD cd ~/images (undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/",
"(undercloud) [stack@director images]USD for NODE in USD(openstack baremetal node list -c UUID -f value) ; do openstack overcloud node configure USDNODE ; done"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/back_up_and_restore_the_director_undercloud/restoring-images-for-overcloud-nodes
|
Chapter 94. Mail Microsoft Oauth
|
Chapter 94. Mail Microsoft Oauth Since Camel 3.18.4 . The Mail Microsoft OAuth2 component provides an implementation of org.apache.camel.component.mail.MailAuthenticator to authenticate IMAP/POP/SMTP connections and access to email via Spring's Mail support and the underlying JavaMail system. 94.1. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-microsoft-oauth</artifactId> </dependency> Importing camel-mail-microsoft-oauth will automatically import the camel-mail component. 94.2. Microsoft Exchange Online OAuth2 Mail Authenticator IMAP sample To use OAuth, an application must be registered with Azure Active Directory. Follow the instructions to register a new application. Procedure Enable the application to access Exchange mailboxes via the client credentials flow. For more information, see Authenticate an IMAP, POP or SMTP connection using OAuth . Once everything is set up, declare and register, in the registry, an instance of org.apache.camel.component.mail.MicrosoftExchangeOnlineOAuth2MailAuthenticator . For example, in a Spring Boot application: @BindToRegistry("auth") public MicrosoftExchangeOnlineOAuth2MailAuthenticator exchangeAuthenticator(){ return new MicrosoftExchangeOnlineOAuth2MailAuthenticator(tenantId, clientId, clientSecret, "[email protected]"); } Then reference it in the Camel URI as follows: from("imaps://outlook.office365.com:993" + "?authenticator=#auth" + "&mail.imaps.auth.mechanisms=XOAUTH2" + "&debugMode=true" + "&delete=false")
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-microsoft-oauth</artifactId> </dependency>",
"@BindToRegistry(\"auth\") public MicrosoftExchangeOnlineOAuth2MailAuthenticator exchangeAuthenticator(){ return new MicrosoftExchangeOnlineOAuth2MailAuthenticator(tenantId, clientId, clientSecret, \"[email protected]\"); }",
"from(\"imaps://outlook.office365.com:993\" + \"?authenticator=#auth\" + \"&mail.imaps.auth.mechanisms=XOAUTH2\" + \"&debugMode=true\" + \"&delete=false\")"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mail-microsoft-oauth-component-starter
|
Integrating Applications with Kamelets
|
Integrating Applications with Kamelets Red Hat build of Apache Camel K 1.10.7 Configuring connectors to simplify application integration Red Hat build of Apache Camel K Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/integrating_applications_with_kamelets/index
|
Chapter 5. Erasure code pools overview
|
Chapter 5. Erasure code pools overview Ceph uses replicated pools by default, meaning that Ceph copies every object from a primary OSD node to one or more secondary OSDs. Erasure-coded pools reduce the amount of disk space required to ensure data durability, but they are computationally a bit more expensive than replication. Ceph storage strategies involve defining data durability requirements. Data durability means the ability to sustain the loss of one or more OSDs without losing data. Ceph stores data in pools, and there are two types of pools: replicated erasure-coded Erasure coding is a method of storing an object in the Ceph storage cluster durably where the erasure code algorithm breaks the object into data chunks ( k ) and coding chunks ( m ), and stores those chunks in different OSDs. In the event of the failure of an OSD, Ceph retrieves the remaining data ( k ) and coding ( m ) chunks from the other OSDs and the erasure code algorithm restores the object from those chunks. Note Red Hat recommends min_size for erasure-coded pools to be K+1 or more to prevent loss of writes and data. Erasure coding uses storage capacity more efficiently than replication. The n-replication approach maintains n copies of an object (3x by default in Ceph), whereas erasure coding maintains only k + m chunks. For example, 3 data and 2 coding chunks use 1.5x the storage space of the original object. While erasure coding uses less storage overhead than replication, the erasure code algorithm uses more RAM and CPU than replication when it accesses or recovers objects. Erasure coding is advantageous when data storage must be durable and fault tolerant, but does not require fast read performance (for example, cold storage, historical records, and so on). For a detailed mathematical explanation of how erasure coding works in Ceph, see the Ceph Erasure Coding section in the Architecture Guide for Red Hat Ceph Storage 7. Ceph creates a default erasure code profile when initializing a cluster with k=2 and m=2 . This means that Ceph spreads the object data over four OSDs ( k+m == 4 ) and Ceph can lose one of those OSDs without losing data. To learn more about erasure code profiles, see the Erasure Code Profiles section. Important Configure only the .rgw.buckets pool as erasure-coded and all other Ceph Object Gateway pools as replicated, otherwise an attempt to create a new bucket fails with the following error: The reason for this is that erasure-coded pools do not support the omap operations and certain Ceph Object Gateway metadata pools require the omap support. 5.1. Creating a sample erasure-coded pool Create an erasure-coded pool and specify the placement groups. The ceph osd pool create command creates an erasure-coded pool with the default profile, unless another profile is specified. Profiles define the redundancy of data by setting two parameters, k and m . These parameters define the number of chunks a piece of data is split into and the number of coding chunks that are created. The simplest erasure coded pool is equivalent to RAID5 and requires at least four hosts. You can create an erasure-coded pool with a 2+2 profile. Procedure Set the following configuration for an erasure-coded pool on four nodes with a 2+2 configuration. Syntax Important This is not needed for an erasure-coded pool in general. Important The async recovery cost is the number of PG log entries behind on the replica and the number of missing objects. The osd_target_pg_log_entries_per_osd is 30000 .
Hence, an OSD with a single PG could have 30000 entries. Since the osd_async_recovery_min_cost is a 64-bit integer, set the value of osd_async_recovery_min_cost to 1099511627776 for an EC pool with a 2+2 configuration. Note For an EC cluster with four nodes, the value of K+M is 2+2. If a node fails completely, the data cannot be recovered as four chunks, because only three nodes are available. When you set the value of mon_osd_down_out_subtree_limit to host , during a host down scenario, it prevents the OSDs from being marked out, so that the data is not rebalanced, and waits until the node is up again. For an erasure-coded pool with a 2+2 configuration, set the profile. Syntax Example Create an erasure-coded pool. Example 32 is the number of placement groups. 5.2. Erasure code profiles Ceph defines an erasure-coded pool with a profile . Ceph uses a profile when creating an erasure-coded pool and the associated CRUSH rule. Ceph creates a default erasure code profile when initializing a cluster and it provides the same level of redundancy as two copies in a replicated pool. This default profile defines k=2 and m=2 , meaning Ceph spreads the object data over four OSDs ( k+m=4 ) and Ceph can lose one of those OSDs without losing data. EC2+2 requires a minimum deployment footprint of 4 nodes (5 nodes recommended) and can cope with the temporary loss of 1 OSD node. To display the default profile, use the following command: You can create a new profile to improve redundancy without increasing raw storage requirements. For instance, a profile with k=8 and m=4 can sustain the loss of four ( m=4 ) OSDs by distributing an object over 12 ( k+m=12 ) OSDs. Ceph divides the object into 8 chunks and computes 4 coding chunks for recovery. For example, if the object size is 8 MB, each data chunk is 1 MB and each coding chunk has the same size as the data chunk, that is also 1 MB. The object is not lost even if four OSDs fail simultaneously. The most important parameters of the profile are k , m and crush-failure-domain , because they define the storage overhead and the data durability. Important Choosing the correct profile is important because you cannot change the profile after you create the pool. To modify a profile, you must create a new pool with a different profile and migrate the objects from the old pool to the new pool. For instance, if the desired architecture must sustain the loss of two racks with a storage overhead of 40%, the following profile can be defined: The primary OSD will divide the NYAN object into four ( k=4 ) data chunks and create two additional chunks ( m=2 ). The value of m defines how many OSDs can be lost simultaneously without losing any data. The crush-failure-domain=rack will create a CRUSH rule that ensures no two chunks are stored in the same rack. Important Red Hat supports the following jerasure coding values for k and m : k=8 m=3 k=8 m=4 k=4 m=2 Important If the number of OSDs lost equals the number of coding chunks ( m ), some placement groups in the erasure coding pool will go into incomplete state. If the number of OSDs lost is less than m , no placement groups will go into incomplete state. In either situation, no data loss will occur. If placement groups are in incomplete state, temporarily reducing min_size of an erasure coded pool will allow recovery. 5.2.1. Setting OSD erasure-code-profile To create a new erasure code profile: Syntax Where: directory Description Set the directory name from which the erasure code plug-in is loaded. Type String Required No.
Default /usr/lib/ceph/erasure-code plugin Description Use the erasure code plug-in to compute coding chunks and recover missing chunks. See the Erasure Code Plug-ins section for details. Type String Required No. Default jerasure stripe_unit Description The amount of data in a data chunk, per stripe. For example, a profile with 2 data chunks and stripe_unit=4K would put the range 0-4K in chunk 0, 4K-8K in chunk 1, then 8K-12K in chunk 0 again. This should be a multiple of 4K for best performance. The default value is taken from the monitor config option osd_pool_erasure_code_stripe_unit when a pool is created. The stripe_width of a pool using this profile will be the number of data chunks multiplied by this stripe_unit . Type String Required No. Default 4K crush-device-class Description The device class, such as hdd or ssd . Type String Required No Default none , meaning CRUSH uses all devices regardless of class. crush-failure-domain Description The failure domain, such as host or rack . Type String Required No Default host key Description The semantics of the remaining key-value pairs are defined by the erasure code plug-in. Type String Required No. --force Description Override an existing profile by the same name. Type String Required No. 5.2.2. Removing OSD erasure-code-profile To remove an erasure code profile: Syntax If the profile is referenced by a pool, the deletion fails. Warning Removing an erasure code profile using the osd erasure-code-profile rm command does not automatically delete the CRUSH rule associated with the erasure code profile. Red Hat recommends manually removing the associated CRUSH rule using the ceph osd crush rule remove RULE_NAME command to avoid unexpected behavior. 5.2.3. Getting OSD erasure-code-profile To display an erasure code profile: Syntax 5.2.4. Listing OSD erasure-code-profile To list the names of all erasure code profiles: Syntax 5.3. Erasure Coding with Overwrites By default, erasure coded pools only work with the Ceph Object Gateway, which performs full object writes and appends. Using erasure coded pools with overwrites allows Ceph Block Devices and CephFS to store their data in an erasure coded pool: Syntax Example Erasure coded pools with overwrites enabled can only reside on BlueStore OSDs, because BlueStore's checksumming is used to detect bit rot or other corruption during deep scrubs. Erasure coded pools do not support omap. To use erasure coded pools with Ceph Block Devices and CephFS, store the data in an erasure coded pool, and the metadata in a replicated pool. For Ceph Block Devices, use the --data-pool option during image creation: Syntax Example If you use erasure coded pools for CephFS, the overwrites setting must be done in a file layout. 5.4. Erasure Code Plugins Ceph supports erasure coding with a plug-in architecture, which means you can create erasure coded pools using different types of algorithms. Ceph supports Jerasure. 5.4.1. Creating a new erasure code profile using jerasure erasure code plugin The jerasure plug-in is the most generic and flexible plug-in. It is also the default for Ceph erasure coded pools. The jerasure plug-in encapsulates the JerasureH library. For detailed information about the parameters, see the jerasure documentation. To create a new erasure code profile using the jerasure plug-in, run the following command: Syntax Where: k Description Each object is split into data-chunks parts, each stored on a different OSD. Type Integer Required Yes.
Example 4 m Description Compute coding chunks for each object and store them on different OSDs. The number of coding chunks is also the number of OSDs that can be down without losing data. Type Integer Required Yes. Example 2 technique Description The more flexible technique is reed_sol_van ; it is enough to set k and m . The cauchy_good technique can be faster but you need to choose the packetsize carefully. All of reed_sol_r6_op , liberation , blaum_roth , liber8tion are RAID6 equivalents in the sense that they can only be configured with m=2 . Type String Required No. Valid Settings reed_sol_van reed_sol_r6_op cauchy_orig cauchy_good liberation blaum_roth liber8tion Default reed_sol_van packetsize Description The encoding will be done on packets of bytes size at a time. Choosing the correct packet size is difficult. The jerasure documentation contains extensive information on this topic. Type Integer Required No. Default 2048 crush-root Description The name of the CRUSH bucket used for the first step of the rule. For instance step take default . Type String Required No. Default default crush-failure-domain Description Ensure that no two chunks are in a bucket with the same failure domain. For instance, if the failure domain is host no two chunks will be stored on the same host. It is used to create a rule step such as step chooseleaf host . Type String Required No. Default host directory Description Set the directory name from which the erasure code plug-in is loaded. Type String Required No. Default /usr/lib/ceph/erasure-code --force Description Override an existing profile by the same name. Type String Required No. 5.4.2. Controlling CRUSH Placement The default CRUSH rule provides OSDs that are on different hosts. For instance: needs exactly 8 OSDs, one for each chunk. If the hosts are in two adjacent racks, the first four chunks can be placed in the first rack and the last four in the second rack. Recovering from the loss of a single OSD does not require using bandwidth between the two racks. For instance: creates a rule that selects two CRUSH buckets of type rack and for each of them choose four OSDs, each of them located in a different bucket of type host . The rule can also be created manually for finer control.
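As a consolidated sketch of the steps above, assuming a test cluster with at least six OSD hosts and an existing replicated pool named rep_pool for the RBD metadata (the profile, pool, and image names are illustrative):
ceph osd erasure-code-profile set myec42 plugin=jerasure k=4 m=2 crush-failure-domain=host
ceph osd pool create ecdata 32 32 erasure myec42
ceph osd pool set ecdata allow_ec_overwrites true
rbd create --size 10G --data-pool ecdata rep_pool/image01
The image's metadata is stored in the replicated rep_pool , while its data is written to the erasure-coded ecdata pool.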
|
[
"set_req_state_err err_no=95 resorting to 500",
"ceph config set mon mon_osd_down_out_subtree_limit host ceph config set osd osd_async_recovery_min_cost 1099511627776",
"ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host",
"ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host Pool : ceph osd pool create test-ec-22 erasure ec22",
"ceph osd pool create ecpool 32 32 erasure pool 'ecpool' created echo ABCDEFGHI | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHI",
"ceph osd erasure-code-profile get default k=2 m=2 plugin=jerasure technique=reed_sol_van",
"ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=rack ceph osd pool create ecpool 12 12 erasure *myprofile* echo ABCDEFGHIJKL | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHIJKL",
"ceph osd erasure-code-profile set NAME [<directory= DIRECTORY >] [<plugin= PLUGIN >] [<stripe_unit= STRIPE_UNIT >] [<_CRUSH_DEVICE_CLASS_>] [<_CRUSH_FAILURE_DOMAIN_>] [<key=value> ...] [--force]",
"ceph osd erasure-code-profile rm RULE_NAME",
"ceph osd erasure-code-profile get NAME",
"ceph osd erasure-code-profile ls",
"ceph osd pool set ERASURE_CODED_POOL_NAME allow_ec_overwrites true",
"ceph osd pool set ec_pool allow_ec_overwrites true",
"rbd create --size IMAGE_SIZE_M|G|T --data-pool _ERASURE_CODED_POOL_NAME REPLICATED_POOL_NAME / IMAGE_NAME",
"rbd create --size 1G --data-pool ec_pool rep_pool/image01",
"ceph osd erasure-code-profile set NAME plugin=jerasure k= DATA_CHUNKS m= DATA_CHUNKS technique= TECHNIQUE [crush-root= ROOT ] [crush-failure-domain= BUCKET_TYPE ] [directory= DIRECTORY ] [--force]",
"chunk nr 01234567 step 1 _cDD_cDD step 2 cDDD____ step 3 ____cDDD",
"crush-steps='[ [ \"choose\", \"rack\", 2 ], [ \"chooseleaf\", \"host\", 4 ] ]'"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/storage_strategies_guide/erasure-code-pools-overview_strategy
|
11.3.2.2. Basic Postfix Configuration
|
11.3.2.2. Basic Postfix Configuration By default, Postfix does not accept network connections from any host other than the local host. Perform the following steps as root to enable mail delivery for other hosts on the network: Edit the /etc/postfix/main.cf file with a text editor, such as vi . Uncomment the mydomain line by removing the hash mark ( # ), and replace domain.tld with the domain the mail server is servicing, such as example.com . Uncomment the myorigin = $mydomain line. Uncomment the myhostname line, and replace host.domain.tld with the hostname for the machine. Uncomment the mydestination = $myhostname, localhost.$mydomain line. Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server. Uncomment the inet_interfaces = all line. Restart the postfix service. Once these steps are complete, the host accepts outside emails for delivery. Postfix has a large assortment of configuration options. One of the best ways to learn how to configure Postfix is to read the comments within /etc/postfix/main.cf . Additional resources including information about LDAP and SpamAssassin integration are available online at http://www.postfix.org/ .
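For example, after these edits the relevant directives in /etc/postfix/main.cf for a hypothetical example.com mail server might read as follows (the domain, hostname, and network values are illustrative and must be adapted to your environment):
mydomain = example.com
myorigin = $mydomain
myhostname = mail.example.com
mydestination = $myhostname, localhost.$mydomain
mynetworks = 192.168.1.0/24, 127.0.0.0/8
inet_interfaces = all
Then restart the service as root to apply the changes:
service postfix restart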
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-email-mta-postfix-conf
|
Common object reference
|
Common object reference OpenShift Container Platform 4.12 Reference guide for common API objects Red Hat OpenShift Documentation Team
|
[
"<quantity> ::= <signedNumber><suffix>",
"(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)",
"(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)",
"(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/common_object_reference/index
|
2.3. Clusters
|
2.3. Clusters 2.3.1. Introduction to Clusters A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models. Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the cluster and settings on the virtual machines. The cluster is the highest level at which power and load-sharing policies can be defined. The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count , respectively. Clusters run virtual machines or Red Hat Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together. Red Hat Virtualization creates a default cluster in the default data center during installation. Figure 2.2. Cluster 2.3.2. Cluster Tasks Note Some cluster options do not apply to Gluster clusters. For more information about using Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . 2.3.2.1. Creating a New Cluster A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must have the same CPU architecture. To optimize your CPU types, create your hosts before you create your cluster. After creating the cluster, you can configure the hosts using the Guide Me button. Procedure Click Compute Clusters . Click New . Select the Data Center the cluster will belong to from the drop-down list. Enter the Name and Description of the cluster. Select a network from the Management Network drop-down list to assign the management network role. Select the CPU Architecture . For CPU Type , select the oldest CPU processor family among the hosts that will be part of this cluster. The CPU types are listed in order from the oldest to newest. Important A host whose CPU processor family is older than the one you specify with CPU Type cannot be part of this cluster. For details, see Which CPU family should a RHEV3 or RHV4 cluster be set to? . Select the FIPS Mode of the cluster from the drop-down list. Select the Compatibility Version of the cluster from the drop-down list. Select the Switch Type from the drop-down list. Select the Firewall Type for hosts in the cluster, either Firewalld (default) or iptables . Note iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld . Select either the Enable Virt Service or Enable Gluster Service check box to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes. Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance. Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Manager, allowing the administrator to provide an explanation for the maintenance.
Optionally select the /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use. The /dev/urandom source (Linux-provided device) is enabled by default. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster. Click the Migration Policy tab to define the virtual machine migration policy for the cluster. Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and select a serial number policy. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster. Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options. Click the MAC Address Pool tab to specify a MAC address pool other than the default pool for the cluster. For more options on creating, editing, or removing MAC address pools, see MAC Address Pools . Click OK to create the cluster and open the Cluster - Guide Me window. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the cluster and clicking More Actions ( ), then clicking Guide Me . 2.3.2.2. General Cluster Settings Explained The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK , prohibiting the changes being accepted. In addition, field prompts indicate the expected values or range of values. Table 2.4. General Cluster Settings Field Description/Action Data Center The data center that will contain the cluster. The data center must be created before adding a cluster. Name The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Description / Comment The description of the cluster or additional notes. These fields are recommended but not mandatory. Management Network The logical network that will be assigned the management network role. The default is ovirtmgmt . This network will also be used for migrating virtual machines if the migration network is not properly attached to the source or the destination hosts. On existing clusters, the management network can only be changed using the Manage Networks button in the Logical Networks tab in the details view. CPU Architecture The CPU architecture of the cluster. All hosts in a cluster must run the architecture you specify. Different CPU types are available depending on which CPU architecture is selected. undefined : All other CPU types. x86_64 : For Intel and AMD CPU types. ppc64 : For IBM POWER CPU types. CPU Type The oldest CPU family in the cluster. For a list of CPU types, see CPU Requirements in the Planning and Prerequisites Guide . You cannot change this after creating the cluster without significant disruption. Set CPU type to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. 
Chipset/Firmware Type This setting is only available if the CPU Architecture of the cluster is set to x86_64 . This setting specifies the chipset and firmware type. Options are: Auto Detect : This setting automatically detects the chipset and firmware type. When Auto Detect is selected, the chipset and firmware are determined by the first host up in the cluster. I440FX Chipset with BIOS : Specifies the I440FX chipset with a firmware type of BIOS. Q35 Chipset with BIOS : Specifies the Q35 chipset with a firmware type of BIOS without UEFI (Default for clusters with compatibility version 4.4). Q35 Chipset with UEFI : Specifies the Q35 chipset with a firmware type of UEFI. (Default for clusters with compatibility version 4.7) Q35 Chipset with UEFI SecureBoot : Specifies the Q35 chipset with a firmware type of UEFI with SecureBoot, which authenticates the digital signatures of the boot loader. For more information, see UEFI and the Q35 chipset in the Administration Guide . Change Existing VMs/Templates from 1440fx to Q35 Chipset with Bios : Select this check box to change existing workloads when the cluster's chipset changes from I440FX to Q35. FIPS Mode The FIPS mode used by the cluster. All hosts in the cluster must run the FIPS mode you specify or they will become non-operational. Auto Detect : This setting automatically detects whether FIPS mode is enabled or disabled. When Auto Detect is selected, the FIPS mode is determined by the first host up in the cluster. Disabled : This setting disables FIPS on the cluster. Enabled : This setting enables FIPS on the cluster. Compatibility Version The version of Red Hat Virtualization. You will not be able to select a version earlier than the version specified for the data center. Switch Type The type of switch used by the cluster. Linux Bridge is the standard Red Hat Virtualization switch. OVS provides support for Open vSwitch networking features. Firewall Type Specifies the firewall type for hosts in the cluster, either firewalld (default) or iptables . iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld . If you change an existing cluster's firewall type, you must reinstall all hosts in the cluster to apply the change. Default Network Provider Specifies the default external network provider that the cluster will use. If you select Open Virtual Network (OVN), the hosts added to the cluster are automatically configured to communicate with the OVN provider. If you change the default network provider, you must reinstall all hosts in the cluster to apply the change. Maximum Log Memory Threshold Specifies the logging threshold for maximum memory consumption as a percentage or as an absolute value in MB. A message is logged if a host's memory usage exceeds the percentage value or if a host's available memory falls below the absolute value in MB. The default is 95% . Enable Virt Service If this check box is selected, hosts in this cluster will be used to run virtual machines. Enable Gluster Service If this check box is selected, hosts in this cluster will be used as Red Hat Gluster Storage Server nodes, and not for running virtual machines. Import existing gluster configuration This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Virtualization Manager.
The following options are required for each host in the cluster that is being imported: Hostname : Enter the IP or fully qualified domain name of the Gluster host server. Host ssh public key (PEM) : Red Hat Virtualization Manager fetches the host's SSH public key, to ensure you are connecting with the correct host. Password : Enter the root password required for communicating with the host. Additional Random Number Generator source If the check box is selected, all hosts in the cluster have the additional random number generator device available. This enables passthrough of entropy from the random number generator device to virtual machines. Gluster Tuned Profile This check box is only available if the Enable Gluster Service check box is selected. This option specifies the virtual-host tuning profile to enable more aggressive writeback of dirty memory pages, which benefits the host performance. 2.3.2.3. Optimization Settings Explained Memory Considerations Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine. CPU Considerations For non-CPU-intensive workloads , you can run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). The following benefits can be achieved: You can run a greater number of virtual machines, which reduces hardware requirements. You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads. For best performance, and especially for CPU-intensive workloads , you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. When the host has hyperthreading enabled, QEMU treats the host's hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core. The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows. Table 2.5. Optimization Settings Field Description/Action Memory Optimization None - Disable memory overcommit : Disables memory page sharing. For Server Load - Allow scheduling of 150% of physical memory : Sets the memory page sharing threshold to 150% of the system memory on each host. For Desktop Load - Allow scheduling of 200% of physical memory : Sets the memory page sharing threshold to 200% of the system memory on each host. CPU Threads Selecting the Count Threads As Cores check box enables hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). 
When this check box is selected, the exposed host threads are treated as cores that virtual machines can use. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores. Memory Balloon Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the Memory Overcommit Manager (MoM) starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine includes a balloon device unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up . If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster . It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution. KSM control Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. 2.3.2.4. Migration Policy Settings Explained A migration policy defines the conditions for live migrating virtual machines in the event of host failure. These conditions include the downtime of the virtual machine during migration, network bandwidth, and how the virtual machines are prioritized. Table 2.6. Migration Policies Explained Policy Description Cluster default (Minimal downtime) Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. Minimal downtime A policy that lets virtual machines migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. Post-copy migration When used, post-copy migration pauses the migrating virtual machine vCPUs on the source host, transfers only a minimum of memory pages, activates the virtual machine vCPUs on the destination host, and transfers the remaining memory pages while the virtual machine is running on the destination. The post-copy policy first tries pre-copy to verify whether convergence can occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. This significantly reduces the downtime of the migrated virtual machine, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source virtual machine change. It is optimal for migrating virtual machines in heavy continuous use, which would not be possible to migrate with standard pre-copy migration. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. 
Warning If the network connection breaks prior to the completion of the post-copy process, the Manager pauses and then kills the running virtual machine. Do not use post-copy migration if the virtual machine availability is critical or if the migration network is unstable. Suspend workload if needed A policy that lets virtual machines migrate in most situations, including virtual machines running heavy workloads. Because of this, virtual machines may experience a more significant downtime than with some of the other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. The bandwidth settings define the maximum bandwidth of both outgoing and incoming migrations per host. Table 2.7. Bandwidth Explained Policy Description Auto Bandwidth is copied from the Rate Limit [Mbps] setting in the data center Host Network QoS . If the rate limit has not been defined, it is computed as a minimum of link speeds of sending and receiving network interfaces. If rate limit has not been set, and link speeds are not available, it is determined by local VDSM setting on sending host. Hypervisor default Bandwidth is controlled by local VDSM setting on sending Host. Custom Defined by user (in Mbps). This value is divided by the number of concurrent migrations (default is 2, to account for ingoing and outgoing migration). Therefore, the user-defined bandwidth must be large enough to accommodate all concurrent migrations. For example, if the Custom bandwidth is defined as 600 Mbps, a virtual machine migration's maximum bandwidth is actually 300 Mbps. The resilience policy defines how the virtual machines are prioritized in the migration. Table 2.8. Resilience Policy Settings Field Description/Action Migrate Virtual Machines Migrates all virtual machines in order of their defined priority. Migrate only Highly Available Virtual Machines Migrates only highly available virtual machines to prevent overloading other hosts. Do Not Migrate Virtual Machines Prevents virtual machines from being migrated. Table 2.9. Additional Properties Settings Field Description/Action Enable Migration Encryption Allows the virtual machine to be encrypted during migration. Cluster default Encrypt Don't encrypt Parallel Migrations Allows you to specify whether and how many parallel migration connections to use. Disabled : The virtual machine is migrated using a single, non-parallel connection. Auto : The number of parallel connections is automatically determined. This setting might automatically disable parallel connections. Auto Parallel : The number of parallel connections is automatically determined. Custom : Allows you to specify the preferred number of parallel connections; the actual number may be lower. Number of VM Migration Connections This setting is only available when Custom is selected. The preferred number of custom parallel migrations, between 2 and 255. 2.3.2.5. Scheduling Policy Settings Explained Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information. Table 2.10.
Scheduling Policy Tab Properties Field Description/Action Select Policy Select a policy from the drop-down list. none : Disables load-balancing or power-sharing between hosts for already-running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . evenly_distributed : Distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , VCpuToPhysicalCpuRatio , or MaxFreeMemoryForOverUtilized . cluster_maintenance : Limits activity in a cluster during maintenance tasks. No new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate. power_saving : Distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that it can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value. vm_evenly_distributed : Distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold . Properties The following properties appear depending on the selected policy. Edit them if necessary: HighVmCount : Sets the minimum number of virtual machines that must be running per host to enable load balancing. The default value is 10 running virtual machines on one host. Load balancing is only enabled when there is at least one host in the cluster that has at least HighVmCount running virtual machines. MigrationThreshold : Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold. The default value is 5 . SpmVmGrace : Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines the SPM host can run in comparison to other hosts. The default value is 5 . CpuOverCommitDurationMinutes : Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action. The defined time interval protects against temporary spikes in CPU load activating scheduling policies and instigating unnecessary virtual machine migration. Maximum two characters. The default value is 2 . HighUtilization : Expressed as a percentage. 
If the host runs with CPU usage at or above the high utilization value for the defined time interval, the Red Hat Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold. The default value is 80 . LowUtilization : Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, the Red Hat Virtualization Manager will migrate virtual machines to other hosts in the cluster. The Manager will power down the original host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. The default value is 20 . ScaleDown : Reduces the impact of the HA Reservation weight function, by dividing a host's score by the specified amount. This is an optional property that can be added to any policy, including none . HostsInReserve : Specifies a number of hosts to keep running even though there are no running virtual machines on them. This is an optional property that can be added to the power_saving policy. EnableAutomaticHostPowerManagement : Enables automatic power management for all hosts in the cluster. This is an optional property that can be added to the power_saving policy. The default value is true . MaxFreeMemoryForOverUtilized : Specifies the minimum amount of free memory a host should have, in MB. If a host has less free memory than this amount, the RHV Manager considers the host overutilized. For example, if you set this property to 1000 , a host that has less than 1 GB of free memory is overutilized. For details on how this property interacts with the power_saving and evenly_distributed policies, see MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties . You can add this property to the power_saving and evenly_distributed policies. Although it appears among the list of properties for the vm_evenly_distributed policy, it does not apply to that policy. MinFreeMemoryForUnderUtilized : Specifies the maximum amount of free memory a host should have, in MB. If a host has more free memory than this amount, the RHV Manager scheduler considers the host underutilized. For example, if you set this parameter to 10000 , a host that has more than 10 GB of free memory is underutilized. For details on how this property interacts with the power_saving and evenly_distributed policies, see MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties . You can add this property to the power_saving and evenly_distributed policies. Although it appears among the list of properties for the vm_evenly_distributed policy, it does not apply to that policy. HeSparesCount : Sets the number of additional self-hosted engine nodes that must reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. Other virtual machines are prevented from starting on a self-hosted engine node if doing so would not leave enough free memory for the Manager virtual machine. This is an optional property that can be added to the power_saving , vm_evenly_distributed , and evenly_distributed policies. The default value is 0 . Scheduler Optimization Optimize scheduling for host weighing/ordering. Optimize for Utilization : Includes weight modules in scheduling to allow best selection. Optimize for Speed : Skips host weighting in cases where there are more than ten pending requests. 
Enable Trusted Service Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details. IMPORTANT : OpenAttestation and Intel Trusted Execution Technology (Intel TXT) are no longer available. Enable HA Reservation Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly. Serial Number Policy Configure the policy for assigning serial numbers to each new virtual machine in the cluster: System Default : Use the system-wide defaults in the Manager database. To configure these defaults, use the engine configuration tool to set the values of the DefaultSerialNumberPolicy and DefaultCustomSerialNumber . These key-value pairs are saved in the vdc_options table of the Manager database. For DefaultSerialNumberPolicy : Default value: HOST_ID Possible values: HOST_ID , VM_ID , CUSTOM Command line example: engine-config --set DefaultSerialNumberPolicy=VM_ID Important: Restart the Manager to apply the configuration. For DefaultCustomSerialNumber : Default value: Dummy serial number Possible values: Any string (max length 255 characters) Command line example: engine-config --set DefaultCustomSerialNumber="My very special string value" Important: Restart the Manager to apply the configuration. Host ID : Set each new virtual machine's serial number to the UUID of the host. Vm ID : Set each new virtual machine's serial number to the UUID of the virtual machine. Custom serial number : Set each new virtual machine's serial number to the value you specify in the following Custom Serial Number parameter. Custom Serial Number Specify the custom serial number to apply to new virtual machines in the cluster. When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log . /var/log/vdsm/mom.log is the Memory Overcommit Manager log file. 2.3.2.6. MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties The scheduler has a background process that migrates virtual machines according to the current cluster scheduling policy and its parameters. Based on the various criteria and their relative weights in a policy, the scheduler continuously categorizes hosts as source hosts or destination hosts and migrates individual virtual machines from the former to the latter. The following description explains how the evenly_distributed and power_saving cluster scheduling policies interact with the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties. Although both policies consider CPU and memory load, CPU load is not relevant for the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties. If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the evenly_distributed policy: Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts. Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become destination hosts. If MaxFreeMemoryForOverUtilized is not defined, the scheduler does not migrate virtual machines based on the memory load. 
(It continues migrating virtual machines based on the policy's other criteria, such as CPU load.) If MinFreeMemoryForUnderUtilized is not defined, the scheduler considers all hosts eligible to become destination hosts. If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the power_saving policy: Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts. Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become source hosts. Hosts that have more free memory than MaxFreeMemoryForOverUtilized are not overutilized and become destination hosts. Hosts that have less free memory than MinFreeMemoryForUnderUtilized are not underutilized and become destination hosts. The scheduler prefers migrating virtual machines to hosts that are neither overutilized nor underutilized. If there are not enough of these hosts, the scheduler can migrate virtual machines to underutilized hosts. If the underutilized hosts are not needed for this purpose, the scheduler can power them down. If MaxFreeMemoryForOverUtilized is not defined, no hosts are overutilized. Therefore, only underutilized hosts are source hosts, and destination hosts include all hosts in the cluster. If MinFreeMemoryForUnderUtilized is not defined, only overutilized hosts are source hosts, and hosts that are not overutilized are destination hosts. To prevent the host from overutilization of all the physical CPUs, define the virtual CPU to physical CPU ratio - VCpuToPhysicalCpuRatio with a value between 0.1 and 2.9. When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine. If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered. In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio . Additional resources Cluster Scheduling Policy Settings 2.3.2.7. Cluster Console Settings Explained The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows. Table 2.11. Console Settings Field Description/Action Define SPICE Proxy for Cluster Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the VM Portal) is outside of the network where the hypervisors reside. Overridden SPICE proxy address The proxy by which the SPICE client connects to virtual machines. The address must be in the following format: 2.3.2.8. Fencing Policy Settings Explained The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows. Table 2.12. Fencing Policy Settings Field Description/Action Enable fencing Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere. Skip fencing if host has live lease on storage If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced. 
Skip fencing on cluster connectivity issues If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold . The Threshold value is selected from the drop-down list; available values are 25 , 50 , 75 , and 100 . Skip fencing if gluster bricks are up This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and can be reached from other peers. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. Skip fencing if gluster quorum not met This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and shutting down the host will cause loss of quorum. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. 2.3.2.9. Setting Load and Power Management Policies for Hosts in a Cluster The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Cluster Scheduling Policy Settings . Procedure Click Compute Clusters and select a cluster. Click Edit . Click the Scheduling Policy tab. Select one of the following policies: none vm_evenly_distributed Set the minimum number of virtual machines that must be running on at least one host to enable load balancing in the HighVmCount field. Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field. Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information. evenly_distributed Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information. Optionally, to prevent the host from overutilization of all the physical CPUs, define the virtual CPU to physical CPU ratio - VCpuToPhysicalCpuRatio with a value between 0.1 and 2.9. 
When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine. If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered. In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio . power_saving Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field. Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information. Choose one of the following as the Scheduler Optimization for the cluster: Select Optimize for Utilization to include weight modules in scheduling to allow best selection. Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box. OpenAttestation and Intel Trusted Execution Technology (Intel TXT) are no longer available. Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines. Optionally select a Serial Number Policy for the virtual machines in the cluster: System Default : Use the system-wide defaults, which are configured in the Manager database using the engine configuration tool and the DefaultSerialNumberPolicy and DefaultCustomSerialNumber key names. The default value for DefaultSerialNumberPolicy is to use the Host ID. See Scheduling Policies in the Administration Guide for more information. Host ID : Set each virtual machine's serial number to the UUID of the host. Vm ID : Set each virtual machine's serial number to the UUID of the virtual machine. Custom serial number : Set each virtual machine's serial number to the value you specify in the following Custom Serial Number parameter. Click OK . 2.3.2.10. Updating the MoM Policy on Hosts in a Cluster The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions for a cluster pass to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode. However, if necessary, you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up . The following procedure must be performed on each host individually. Procedure Click Compute Clusters . Click the cluster's name. This opens the details view. Click the Hosts tab and select the host that requires an updated MoM policy. Click Sync MoM Policy . The MoM policy on the host is updated without having to move the host to maintenance mode and back Up . 2.3.2.11.
Creating a CPU Profile CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect. This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs. Procedure Click Compute Clusters . Click the cluster's name. This opens the details view. Click the CPU Profiles tab. Click New . Enter a Name and a Description for the CPU profile. Select the quality of service to apply to the CPU profile from the QoS list. Click OK . 2.3.2.12. Removing a CPU Profile Remove an existing CPU profile from your Red Hat Virtualization environment. Procedure Click Compute Clusters . Click the cluster's name. This opens the details view. Click the CPU Profiles tab and select the CPU profile to remove. Click Remove . Click OK . If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default CPU profile. 2.3.2.13. Importing an Existing Red Hat Gluster Storage Cluster You can import a Red Hat Gluster Storage cluster and all hosts belonging to the cluster into Red Hat Virtualization Manager. When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status command is executed on that host through SSH and displays a list of hosts that are a part of the cluster. You must manually verify the fingerprint of each host and provide passwords for them. You will not be able to import the cluster if one of the hosts in the cluster is down or unreachable. As the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. Procedure Click Compute Clusters . Click New . Select the Data Center the cluster will belong to. Enter the Name and Description of the cluster. Select the Enable Gluster Service check box and the Import existing gluster configuration check box. The Import existing gluster configuration field is only displayed if the Enable Gluster Service check box is selected. In the Hostname field, enter the host name or IP address of any server in the cluster. The host SSH Fingerprint displays to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, an error Error in fetching fingerprint displays in the Fingerprint field. Enter the Password for the server, and click OK . The Add Hosts window opens, and a list of hosts that are a part of the cluster displays. For each host, enter the Name and the Root Password . If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field. Click Apply to set the entered password for all hosts. Verify that the fingerprints are valid and submit your changes by clicking OK . The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Gluster Storage cluster into Red Hat Virtualization Manager. 2.3.2.14.
Explanation of Settings in the Add Hosts Window The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service check box in the New Cluster window and provided the necessary host details. Table 2.13. Add Gluster Hosts Settings Field Description Use a common password Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts. Name Enter the name of the host. Hostname/IP This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window. Root Password Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster. Fingerprint The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window. 2.3.2.15. Removing a Cluster Move all hosts out of a cluster before removing it. Note You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center. Procedure Click Compute Clusters and select a cluster. Ensure there are no hosts in the cluster. Click Remove . Click OK 2.3.2.16. Memory Optimization To increase the number of virtual machines on a host, you can use memory overcommitment , in which the memory you assign to virtual machines exceeds RAM and relies on swap space. However, there are potential problems with memory overcommitment: Swapping performance - Swap space is slower and consumes more CPU resources than RAM, impacting virtual machine performance. Excessive swapping can lead to CPU thrashing. Out-of-memory (OOM) killer - If the host runs out of swap space, new processes cannot start, and the kernel's OOM killer daemon begins shutting down active processes such as virtual machine guests. To help overcome these shortcomings, you can do the following: Limit memory overcommitment using the Memory Optimization setting and the Memory Overcommit Manager (MoM) . Make the swap space large enough to accommodate the maximum potential demand for virtual memory and have a safety margin remaining. Reduce virtual memory size by enabling memory ballooning and Kernel Same-page Merging (KSM) . 2.3.2.17. Memory Optimization and Memory Overcommitment You can limit the amount of memory overcommitment by selecting one of the Memory Optimization settings: None (0%), 150% , or 200% . Each setting represents a percentage of RAM. For example, with a host that has 64 GB RAM, selecting 150% means you can overcommit memory by an additional 32 GB, for a total of 96 GB in virtual memory. If the host uses 4 GB of that total, the remaining 92 GB are available. You can assign most of that to the virtual machines ( Memory Size on the System tab), but consider leaving some of it unassigned as a safety margin. Sudden spikes in demand for virtual memory can impact performance before the MoM, memory ballooning, and KSM have time to re-optimize virtual memory. To reduce that impact, select a limit that is appropriate for the kinds of applications and workloads you are running: For workloads that produce more incremental growth in demand for memory, select a higher percentage, such as 200% or 150% . 
For more critical applications or workloads that produce more sudden increases in demand for memory, select a lower percentage, such as 150% or None (0%). Selecting None helps prevent memory overcommitment but allows the MoM, memory balloon devices, and KSM to continue optimizing virtual memory. Important Always test your Memory Optimization settings by stress testing under a wide range of conditions before deploying the configuration to production. To configure the Memory Optimization setting, click the Optimization tab in the New Cluster or Edit Cluster windows. See Cluster Optimization Settings Explained . Additional comments: The Host Statistics views display useful historical information for sizing the overcommitment ratio. The actual memory available cannot be determined in real time because the amount of memory optimization achieved by KSM and memory ballooning changes continuously. When virtual machines reach the virtual memory limit, new applications cannot start. When you plan the number of virtual machines to run on a host, use the maximum virtual memory (physical memory size and the Memory Optimization setting) as a starting point. Do not factor in the smaller virtual memory achieved by memory optimizations such as memory ballooning and KSM. 2.3.2.18. Swap Space and Memory Overcommitment Red Hat provides these recommendations for configuring swap space . When applying these recommendations, follow the guidance to size the swap space as "last effort memory" for a worst-case scenario. Use the physical memory size and Memory Optimization setting as a basis for estimating the total virtual memory size. Exclude any reduction of the virtual memory size from optimization by the MoM, memory ballooning, and KSM. Important To help prevent an OOM condition, make the swap space large enough to handle a worst-case scenario and still have a safety margin available. Always stress-test your configuration under a wide range of conditions before deploying it to production. 2.3.2.19. The Memory Overcommit Manager (MoM) The Memory Overcommit Manager (MoM) does two things: It limits memory overcommitment by applying the Memory Optimization setting to the hosts in a cluster, as described in the preceding section. It optimizes memory by managing the memory ballooning and KSM , as described in the following sections. You do not need to enable or disable MoM. When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log , the Memory Overcommit Manager log file. 2.3.2.20. Memory Ballooning Virtual machines start with the full amount of virtual memory you have assigned to them. As virtual memory usage exceeds RAM, the host relies more on swap space. If enabled, memory ballooning lets virtual machines give up the unused portion of that memory. The freed memory can be reused by other processes and virtual machines on the host. The reduced memory footprint makes swapping less likely and improves performance. The virtio-balloon package that provides the memory balloon device and drivers ships as a loadable kernel module (LKM). By default, it is configured to load automatically. Adding the module to the denylist or unloading it disables ballooning.
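As an illustrative check only (these are standard Linux commands, not steps from this guide), you can confirm from a guest's shell whether the balloon driver is present and loaded:
lsmod | grep virtio_balloon
If the module is not listed, modprobe virtio_balloon loads it, assuming the guest kernel ships the driver.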
The memory balloon devices do not coordinate directly with each other; they rely on the host's Memory Overcommit Manager (MoM) process to continuously monitor each virtual machine's needs and instruct the balloon device to increase or decrease virtual memory. Performance considerations: Red Hat does not recommend memory ballooning and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools . Use memory ballooning when increasing virtual machine density (economy) is more important than performance. Memory ballooning does not have a significant impact on CPU utilization. (KSM consumes some CPU resources, but consumption remains consistent under pressure.) To enable memory ballooning, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable Memory Balloon Optimization checkbox. This setting enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the MoM starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. See Cluster Optimization Settings Explained . Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster . 2.3.2.21. Kernel Same-page Merging (KSM) When a virtual machine runs, it often creates duplicate memory pages for items such as common libraries and high-use data. Furthermore, virtual machines that run similar guest operating systems and applications produce duplicate memory pages in virtual memory. When enabled, Kernel Same-page Merging (KSM) examines the virtual memory on a host, eliminates duplicate memory pages, and shares the remaining memory pages across multiple applications and virtual machines. These shared memory pages are marked copy-on-write; if a virtual machine needs to write changes to the page, it makes a copy first before writing its modifications to that copy. While KSM is enabled, the MoM manages KSM. You do not need to configure or control KSM manually. KSM increases virtual memory performance in two ways. Because a shared memory page is used more frequently, the host is more likely to store it in cache or main memory, which improves the memory access speed. Additionally, with memory overcommitment, KSM reduces the virtual memory footprint, reducing the likelihood of swapping and improving performance. KSM consumes more CPU resources than memory ballooning. The amount of CPU KSM consumes remains consistent under pressure. Running identical virtual machines and applications on a host provides KSM with more opportunities to merge memory pages than running dissimilar ones. If you run mostly dissimilar virtual machines and applications, the CPU cost of using KSM may offset its benefits. Performance considerations: After the KSM daemon merges large amounts of memory, the kernel memory accounting statistics may eventually contradict each other. If your system has a large amount of free memory, you might improve performance by disabling KSM. Red Hat does not recommend KSM and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools . Use KSM when increasing virtual machine density (economy) is more important than performance.
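For a quick, informal look at whether KSM is actually merging pages on a host, the standard kernel sysfs counters can be read directly (shown here only as an illustration; MoM still controls when KSM runs):
cat /sys/kernel/mm/ksm/run
cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
A run value of 1 means KSM is active, and a growing pages_sharing count indicates that duplicate pages are being merged.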
To enable KSM, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable KSM checkbox. This setting enables MoM to run KSM when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. See Cluster Optimization Settings Explained . 2.3.2.22. UEFI and the Q35 chipset The Intel Q35 chipset, the default chipset for new virtual machines, includes support for the Unified Extensible Firmware Interface (UEFI), which replaces legacy BIOS. Alternatively you can configure a virtual machine or cluster to use the legacy Intel i440fx chipset, which does not support UEFI. UEFI provides several advantages over legacy BIOS, including the following: A modern boot loader SecureBoot, which authenticates the digital signatures of the boot loader GUID Partition Table (GPT), which enables disks larger than 2 TB To use UEFI on a virtual machine, you must configure the virtual machine's cluster for 4.4 compatibility or later. Then you can set UEFI for any existing virtual machine, or to be the default BIOS type for new virtual machines in the cluster. The following options are available: Table 2.14. Available BIOS Types BIOS Type Description Q35 Chipset with Legacy BIOS Legacy BIOS without UEFI (Default for clusters with compatibility version 4.4) Q35 Chipset with UEFI BIOS BIOS with UEFI Q35 Chipset with SecureBoot UEFI with SecureBoot, which authenticates the digital signatures of the boot loader Legacy i440fx chipset with legacy BIOS Setting the BIOS type before installing the operating system You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI is not supported after installing an operating system. 2.3.2.23. Configuring a cluster to use the Q35 Chipset and UEFI After upgrading a cluster to Red Hat Virtualization 4.4, all virtual machines in the cluster run the 4.4 version of VDSM. You can configure a cluster's default BIOS type, which determines the default BIOS type of any new virtual machines you create in that cluster. If necessary, you can override the cluster's default BIOS type by specifying a different BIOS type when you create a virtual machine. Procedure In the VM Portal or the Administration Portal, click Compute Clusters . Select a cluster and click Edit . Click General . Define the default BIOS type for new virtual machines in the cluster by clicking the BIOS Type dropdown menu, and selecting one of the following: Legacy Q35 Chipset with Legacy BIOS Q35 Chipset with UEFI BIOS Q35 Chipset with SecureBoot From the Compatibility Version dropdown menu select 4.4 . The Manager checks that all running hosts are compatible with 4.4, and if they are, the Manager uses 4.4 features. If any existing virtual machines in the cluster should use the new BIOS type, configure them to do so. Any new virtual machines in the cluster that are configured to use the BIOS type Cluster default now use the BIOS type you selected. For more information, see Configuring a virtual machine to use the Q35 Chipset and UEFI . Note Because you can change the BIOS type only before installing an operating system, for any existing virtual machines that are configured to use the BIOS type Cluster default , change the BIOS type to the default cluster BIOS type. Otherwise the virtual machine might not boot. Alternatively, you can reinstall the virtual machine's operating system. 2.3.2.24. 
Configuring a virtual machine to use the Q35 Chipset and UEFI You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI, or from UEFI to legacy BIOS, might prevent the virtual machine from booting. If you change the BIOS type of an existing virtual machine, reinstall the operating system. Warning If the virtual machine's BIOS type is set to Cluster default , changing the BIOS type of the cluster changes the BIOS type of the virtual machine. If the virtual machine has an operating system installed, changing the cluster BIOS type can cause booting the virtual machine to fail. Procedure To configure a virtual machine to use the Q35 chipset and UEFI: In the VM Portal or the Administration Portal, click Compute Virtual Machines . Select a virtual machine and click Edit . On the General tab, click Show Advanced Options . Click System Advanced Parameters . Select one of the following from the BIOS Type dropdown menu: Cluster default Q35 Chipset with Legacy BIOS Q35 Chipset with UEFI BIOS Q35 Chipset with SecureBoot Click OK . From the Virtual Machine portal or the Administration Portal, power off the virtual machine. The next time you start the virtual machine, it will run with the new BIOS type you selected. 2.3.2.25. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. 
After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon ( ). You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. In a self-hosted engine environment, the Manager virtual machine does not need to be restarted. Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Virtual machines that have not been updated run with the old configuration, and the new configuration could be overwritten if other changes are made to the virtual machine before the reboot. Once you have updated the compatibility version of all clusters and virtual machines in a data center, you can then change the compatibility version of the data center itself.
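The same compatibility version change can be scripted against the Manager's REST API. The example below is a sketch only: the Manager hostname, credentials, and cluster ID are placeholders, and it assumes the default /ovirt-engine/api endpoint.

    # Inspect the cluster, including its current compatibility version
    curl -s -k -u admin@internal:password -H "Accept: application/xml" \
      https://manager.example.com/ovirt-engine/api/clusters/<cluster_id>

    # Raise the compatibility version, for example to 4.6
    curl -s -k -u admin@internal:password -X PUT -H "Content-Type: application/xml" \
      -d '<cluster><version><major>4</major><minor>6</minor></version></cluster>' \
      https://manager.example.com/ovirt-engine/api/clusters/<cluster_id>

Virtual machines that still need a reboot afterwards can be identified in the API by their next_run_configuration_exists flag, which corresponds to the pending changes icon in the Administration Portal.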
|
[
"protocol://[host]:[port]"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-clusters
|
Chapter 19. System and Subscription Management
|
Chapter 19. System and Subscription Management cockpit rebased to version 173 The cockpit packages, which provide the Cockpit browser-based administration console, have been upgraded to version 173. This version provides a number of bug fixes and enhancements. Notable changes include: The menu and navigation can now work with mobile browsers. Cockpit now supports alternate Kerberos keytabs for Cockpit's web server, which enables configuration of Single Sign-On (SSO). Automatic setup of Kerberos keytab for Cockpit web server. Automatic configuration of SSO with FreeIPA for Cockpit is possible. Cockpit requests FreeIPA SSL certificate for Cockpit's web server. Cockpit shows available package updates and missing registrations on system front page. A Firewall interface has been added. The flow control to avoid user interface hangs and unbounded memory usage for big file downloads has been added. Terminal issues in Chrome have been fixed. Cockpit now properly localizes numbers, times, and dates. Subscriptions page hang when accessing as a non-administrator user has been fixed. Log in is now localized properly. The check for root privilege availability has been improved to work for FreeIPA administrators as well. (BZ# 1568728 , BZ# 1495543 , BZ# 1442540 , BZ#1541454, BZ#1574630) reposync now by default skips packages whose location falls outside the destination directory Previously, the reposync command did not sanitize paths to packages specified in a remote repository, which was insecure. A security fix for CVE-2018-10897 has changed the default behavior of reposync to not store any packages outside the specified destination directory. To restore the original insecure behavior, use the new --allow-path-traversal option. (BZ#1609302, BZ#1600618) The yum clean all command now prints a disk usage summary When using the yum clean all command, the following hint was always displayed: With this update, the hint has been removed, and yum clean all now prints a disk usage summary for remaining repositories that were not affected by yum clean all (BZ# 1481220 ) The yum versionlock plug-in now displays which packages are blocked when running the yum update command Previously, the yum versionlock plug-in, which is used to lock RPM packages, did not display any information about packages excluded from the update. Consequently, users were not warned that such packages will not be updated when running the yum update command. With this update, yum versionlock has been changed. The plug-in now prints a message about how many package updates are being excluded. In addition, the new status subcommand has been added to the plug-in. The yum versionlock status command prints the list of available package updates blocked by the plug-in. (BZ# 1497351 ) The repotrack command now supports the --repofrompath option The --repofrompath option , which is already supported by the repoquery and repoclosure commands, has been added to the repotrack command. As a result, non-root users can now add custom repositories to track without escalating their privileges. (BZ# 1506205 ) Subscription manager now respects proxy_port settings from rhsm.conf Previously, subscription manager did not respect changes to the default proxy_port configuration from the /etc/rhsm/rhsm.conf file. Consequently, the default value of 3128 was used even after the user had changed the value of proxy_port . With this update, the underlying source code has been fixed, and subscription manager now respects changes to the default proxy_port configuration. 
However, making any change to the proxy_port value in /etc/rhsm/rhsm.conf requires an SELinux policy change. To avoid SELinux denials when changing the default proxy_port , run this command for the benefit of the rhsmcertd daemon process: (BZ# 1576423 ) New package: sos-collector sos-collector is a utility that gathers sosreports from multi-node environments. sos-collector facilitates data collection for support cases and it can be run from either a node or from an administrator's local workstation that has network access to the environment. (BZ#1481861)
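Putting the proxy_port change and the accompanying SELinux adjustment described above together, a typical sequence looks like the following sketch; the port value 3129 is only an example.

    # Point subscription manager at the new proxy port
    sed -i 's/^proxy_port *=.*/proxy_port = 3129/' /etc/rhsm/rhsm.conf

    # Allow the rhsmcertd daemon to reach the non-default port under SELinux
    semanage port -a -t squid_port_t -p tcp 3129

    # Restart the daemon so it picks up the new configuration
    systemctl restart rhsmcertd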
|
[
"Maybe you want: rm -rf /var/cache/yum",
"semanage port -a -t squid_port_t -p tcp <new_proxy_port>"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_system_and_subscription_management
|
Chapter 42. ip
|
Chapter 42. ip This chapter describes the commands under the ip command. 42.1. ip availability list List IP availability for network Usage: Table 42.1. Command arguments Value Summary -h, --help Show this help message and exit --ip-version <ip-version> List ip availability of given ip version networks (default is 4) --project <project> List ip availability of given project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 42.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 42.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 42.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 42.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 42.2. ip availability show Show network IP availability details Usage: Table 42.6. Positional arguments Value Summary <network> Show ip availability for a specific network (name or ID) Table 42.7. Command arguments Value Summary -h, --help Show this help message and exit Table 42.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 42.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 42.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 42.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
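Typical invocations combining the arguments above look like the following; the project and network names are examples.

    # List IPv4 availability for networks in one project, formatted as JSON
    openstack ip availability list --ip-version 4 --project demo-project -f json

    # Show IP availability details for a single network in shell-variable form
    openstack ip availability show private-net -f shell --prefix NET_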
|
[
"openstack ip availability list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>]",
"openstack ip availability show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/ip
|
Deploying OpenShift Data Foundation on VMware vSphere
|
Deploying OpenShift Data Foundation on VMware vSphere Red Hat OpenShift Data Foundation 4.18 Instructions on deploying OpenShift Data Foundation using VMware vSphere infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on VMware vSphere clusters. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) vSphere clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow any one of the below deployment process for your environment: Internal mode Deploy using dynamic storage devices Deploy using local storage devices Deploy standalone Multicloud Object Gateway External mode Deploying OpenShift Data Foundation in external mode Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Verify the rotational flag on your VMDKs before deploying object storage devices (OSDs) on them. For more information, see the knowledgebase article Override device rotational flag in ODF environment . Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Token authentication method . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your Vault servers. 
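One quick way to confirm that a Vault server presents a signed certificate before you point OpenShift Data Foundation at it is to inspect the TLS handshake directly. This is a generic OpenSSL check rather than an OpenShift Data Foundation command, and the hostname and port are placeholders.

    # Show the subject, issuer, and validity dates of the certificate Vault presents
    openssl s_client -connect vault.example.com:8200 -showcerts </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates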
Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. 
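Before selecting nodes, you can confirm from a node debug shell that a candidate disk is an empty raw block device with a stable by-id name. The sketch below is illustrative; the node name worker-0 and the device /dev/sdb are examples.

    # Start a debug shell on the node and switch into the host filesystem
    oc debug node/worker-0
    chroot /host

    # Raw disks with size, rotational flag, and mountpoint (no mountpoint expected)
    lsblk -d -o NAME,SIZE,TYPE,ROTA,MOUNTPOINT

    # Confirm the candidate disk has a persistent by-id name
    ls -l /dev/disk/by-id/ | grep sdb

    # wipefs prints nothing when the disk carries no filesystem, partition, or LVM signatures
    wipefs /dev/sdb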
Note Make sure that the devices have a unique by-id device name for each available raw block device. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by VMware vSphere (disk format: thin) provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Note Both internal and external OpenShift Data Foundation clusters are supported on VMware vSphere. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. 
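You can confirm that the account you are logged in with actually has cluster-admin rights before starting; this is a generic OpenShift check and not specific to OpenShift Data Foundation.

    # Returns "yes" when the current user may act on any resource cluster-wide
    oc auth can-i '*' '*' --all-namespaces

    # Show the current user and the cluster the kubeconfig points at
    oc whoami
    oc whoami --show-server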
You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 
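The Vault-side commands referenced in the Token authentication procedure above are not reproduced in this extract. Under typical Vault usage they look roughly like the following; the backend path odf and the policy name are placeholders that you choose once and cannot change later.

    # Enable the Key/Value secret engine at the chosen path (API version 1)
    vault secrets enable -path=odf kv

    # Or enable the Key/Value secret engine, API version 2
    vault secrets enable -path=odf kv-v2

    # Restrict what holders of the token may do under the backend path
    echo 'path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }' | vault policy write odf -

    # Create a token bound to that policy for the KMS connection details
    vault token create -policy=odf -format=json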
The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 2.3.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.3.1.2. Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . For VMs on VMware, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab . For more information, see Installing on vSphere . 
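If you manage vSphere from the command line, the disk.EnableUUID prerequisite can also be checked and set with the govc utility instead of the vSphere Client. This is an alternative sketch rather than the documented procedure; the inventory path to the virtual machine is a placeholder, and the change is typically made while the VM is powered off.

    # Check the current extra configuration of the virtual machine
    govc vm.info -e /Datacenter/vm/worker-0 | grep -i disk.enableuuid

    # Set disk.EnableUUID to TRUE
    govc vm.change -vm /Datacenter/vm/worker-0 -e disk.EnableUUID=TRUE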
Optional: If you want to use thick-provisioned storage for flexibility, you must create a storage class with zeroedthick or eagerzeroedthick disk format. For information, see VMware vSphere object definition . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to thin-csi . If you have created a storage class with zeroedthick or eagerzeroedthick disk format for thick-provisioned storage, then that storage class is listed in addition to the default, thin-csi storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Spread the worker nodes across three different physical nodes, racks, or failure domains for high availability. Use vCenter anti-affinity to align OpenShift Data Foundation rack labels with physical nodes and racks in the data center to avoid scheduling two worker nodes on the same physical chassis. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of the aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. 
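To check whether the nodes you plan to select can meet the aggregated 30 CPU and 72 GiB guidance before you run the wizard, you can list their allocatable resources. The label selector below assumes plain worker nodes.

    # CPU and memory reported as allocatable by each worker node
    oc get nodes -l node-role.kubernetes.io/worker \
      -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory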
Select the Taint nodes checkbox to make selected nodes dedicated for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. 
For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . 
Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. Ensure that the disk type is SSD, which is the only supported disk type. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Installing on vSphere . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Select one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. 
Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. 
This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. 
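The same verification can be done from the command line; a rough equivalent of the console steps above is:

    # The StorageCluster phase should report Ready
    oc get storagecluster -n openshift-storage

    # All pods in the namespace should be Running or Completed
    oc get pods -n openshift-storage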
To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 4.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw Chapter 5. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using the local storage devices. 5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway After deploying the component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . 5.1.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . 
Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.1.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) 5.2. Deploy standalone Multicloud Object Gateway using local storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . 5.2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . 
Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.2.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . 
In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the Storage cluster. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or alert indications. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information.
This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
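The verification chapters above use the OpenShift Web Console. As an optional cross-check from the command line, the following minimal sketch queries the same information with oc. It assumes the default openshift-storage namespace and the default ocs-storagecluster StorageCluster name used elsewhere in this guide; adjust the names for your environment.

# List the OpenShift Data Foundation pods and confirm they are Running or Completed
oc get pods -n openshift-storage

# Print the flexible scaling setting and the failure domain reported by the StorageCluster
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.flexibleScaling}{" "}{.status.failureDomain}{"\n"}'

# Confirm that the expected storage classes were created
oc get storageclass | grep -E 'ocs-storagecluster|noobaa'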
|
[
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"oc annotate namespace openshift-storage openshift.io/node-selector="
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_on_vmware_vsphere/index
|
3.3. Logical Volume Backup
|
3.3. Logical Volume Backup Metadata backups and archives are automatically created whenever there is a configuration change for a volume group or logical volume, unless this feature is disabled in the lvm.conf file. By default, the metadata backup is stored in the /etc/lvm/backup file and the metadata archives are stored in the /etc/lvm/archive file. How long the metadata archives stored in the /etc/lvm/archive file are kept and how many archive files are kept is determined by parameters you can set in the lvm.conf file. A daily system backup should include the contents of the /etc/lvm directory in the backup. Note that a metadata backup does not back up the user and system data contained in the logical volumes. You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command. You can restore metadata with the vgcfgrestore command. The vgcfgbackup and vgcfgrestore commands are described in Section 4.3.13, "Backing Up Volume Group Metadata" .
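A minimal example of a manual metadata backup and restore with these commands is sketched below; the volume group name myvg and the backup file path are placeholders, so substitute your own values.

# Back up the metadata of volume group myvg to a named file
vgcfgbackup -f /root/myvg-metadata.backup myvg

# Restore the metadata of myvg from that file (typically done while the logical volumes are inactive)
vgcfgrestore -f /root/myvg-metadata.backup myvg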
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/backup
|
Chapter 3. Enabling user-managed encryption for Azure
|
Chapter 3. Enabling user-managed encryption for Azure In OpenShift Container Platform version 4.14, you can install a cluster with a user-managed encryption key in Azure. To enable this feature, you can prepare an Azure DiskEncryptionSet before installation, modify the install-config.yaml file, and then complete the installation. 3.1. Preparing an Azure Disk Encryption Set The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer. Procedure Set the following environment variables for the Azure resource group by running the following command: USD export RESOURCEGROUP="<resource_group>" \ 1 LOCATION="<location>" 2 1 Specifies the name of the Azure resource group where you will create the Disk Encryption Set and encryption key. To avoid losing access to your keys after destroying the cluster, you should create the Disk Encryption Set in a different resource group than the resource group where you install the cluster. 2 Specifies the Azure location where you will create the resource group. Set the following environment variables for the Azure Key Vault and Disk Encryption Set by running the following command: USD export KEYVAULT_NAME="<keyvault_name>" \ 1 KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2 DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3 1 Specifies the name of the Azure Key Vault you will create. 2 Specifies the name of the encryption key you will create. 3 Specifies the name of the disk encryption set you will create. Set the environment variable for the ID of your Azure Service Principal by running the following command: USD export CLUSTER_SP_ID="<service_principal_id>" 1 1 Specifies the ID of the service principal you will use for this installation. 
Enable host-level encryption in Azure by running the following commands: USD az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost" USD az feature show --namespace Microsoft.Compute --name EncryptionAtHost USD az provider register -n Microsoft.Compute Create an Azure Resource Group to hold the disk encryption set and associated resources by running the following command: USD az group create --name USDRESOURCEGROUP --location USDLOCATION Create an Azure key vault by running the following command: USD az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION \ --enable-purge-protection true Create an encryption key in the key vault by running the following command: USD az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME \ --protection software Capture the ID of the key vault by running the following command: USD KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query "[id]" -o tsv) Capture the key URL in the key vault by running the following command: USD KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name \ USDKEYVAULT_KEY_NAME --query "[key.kid]" -o tsv) Create a disk encryption set by running the following command: USD az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g \ USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL Grant the DiskEncryptionSet resource access to the key vault by running the following commands: USD DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \ USDRESOURCEGROUP --query "[identity.principalId]" -o tsv) USD az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id \ USDDES_IDENTITY --key-permissions wrapkey unwrapkey get Grant the Azure Service Principal permission to read the DiskEncryptionSet by running the following commands: USD DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \ USDRESOURCEGROUP --query "[id]" -o tsv) USD az role assignment create --assignee USDCLUSTER_SP_ID --role "<reader_role>" \ 1 --scope USDDES_RESOURCE_ID -o jsonc 1 Specifies an Azure role with read permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 3.2. Next steps Install an OpenShift Container Platform cluster: Install a cluster with customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Install a cluster into an existing VNet on installer-provisioned infrastructure Install a private cluster on installer-provisioned infrastructure Install a cluster into a government region on installer-provisioned infrastructure
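Feature registration with az feature register is asynchronous and can take several minutes to complete. If the later disk encryption set steps fail, it can help to confirm that the EncryptionAtHost feature reports as Registered. A small hedged check, assuming the same Azure CLI session as the procedure above:

# Print the registration state of the EncryptionAtHost feature; proceed once it reports "Registered"
az feature show --namespace Microsoft.Compute --name EncryptionAtHost \
  --query "properties.state" -o tsv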
|
[
"export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2",
"export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3",
"export CLUSTER_SP_ID=\"<service_principal_id>\" 1",
"az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"",
"az feature show --namespace Microsoft.Compute --name EncryptionAtHost",
"az provider register -n Microsoft.Compute",
"az group create --name USDRESOURCEGROUP --location USDLOCATION",
"az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true",
"az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software",
"KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)",
"KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)",
"az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL",
"DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)",
"az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get",
"DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)",
"az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/enabling-user-managed-encryption-azure
|
Network APIs
|
Network APIs OpenShift Container Platform 4.14 Reference guide for network APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/index
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_bridge/proc-providing-feedback-on-redhat-documentation
|
Python SDK Guide
|
Python SDK Guide Red Hat Virtualization 4.3 Using the Red Hat Virtualization Python SDK Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This guide describes how to install and work with version 3 and version 4 of the Red Hat Virtualization Python software development kit.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/python_sdk_guide/index
|
Red Hat build of OpenTelemetry
|
Red Hat build of OpenTelemetry OpenShift Container Platform 4.14 Configuring and using the Red Hat build of OpenTelemetry in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/index
|
2.4. Enabling Dynamic DNS Updates
|
2.4. Enabling Dynamic DNS Updates AD allows its clients to refresh their DNS records automatically. AD also actively maintains DNS records to make sure they are updated, including timing out (aging) and removing (scavenging) inactive records. DNS scavenging is not enabled by default on the AD side. SSSD allows the Linux system to imitate a Windows client by refreshing its DNS record, which also prevents its record from being marked inactive and removed from the DNS record. When dynamic DNS updates are enabled, the client's DNS record is refreshed: when the identity provider comes online (always) when the Linux system reboots (always) at a specified interval (optional configuration); by default, the AD provider updates the DNS record every 24 hours You can set this behavior to the same interval as the DHCP lease. In this case, the Linux client is renewed after the lease is renewed. DNS updates are sent to the AD server using Kerberos/GSSAPI for DNS (GSS-TSIG). This means that only secure connections need to be enabled. The dynamic DNS configuration is set for each domain. For example: For details on these options, see the sssd-ad (5) man page.
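After editing the dyndns_* options in sssd.conf, SSSD must re-read its configuration before the new behavior takes effect. The following is a minimal sketch of applying the change and checking the result; the domain name comes from the example above and the client host name client1 is a placeholder:

# Restart SSSD so that the new dyndns_* options are applied
systemctl restart sssd

# Query AD DNS for this client's A record to confirm that it was refreshed
dig +short client1.ad.example.com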
|
[
"[domain/ad.example.com] id_provider = ad auth_provider = ad chpass_provider = ad access_provider = ad ldap_schema = ad dyndns_update = true dyndns_refresh_interval = 43200 dyndns_update_ptr = true dyndns_ttl = 3600"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/sssd-dyndns
|
25.17. Resizing an Online Logical Unit
|
25.17. Resizing an Online Logical Unit In most cases, fully resizing an online logical unit involves two things: resizing the logical unit itself and reflecting the size change in the corresponding multipath device (if multipathing is enabled on the system). To resize the online logical unit, start by modifying the logical unit size through the array management interface of your storage device. This procedure differs with each array; as such, consult your storage array vendor documentation for more information on this. Note In order to resize an online file system, the file system must not reside on a partitioned device. 25.17.1. Resizing Fibre Channel Logical Units After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the updated size. To do this for Fibre Channel logical units, use the following command: Important To re-scan Fibre Channel logical units on a system that uses multipathing, execute the aforementioned command for each sd device (i.e. sd1 , sd2 , and so on) that represents a path for the multipathed logical unit. To determine which devices are paths for a multipath logical unit, use multipath -ll ; then, find the entry that matches the logical unit being resized. It is advisable that you refer to the WWID of each entry to make it easier to find which one matches the logical unit being resized. 25.17.2. Resizing an iSCSI Logical Unit After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the updated size. To do this for iSCSI devices, use the following command: Replace target_name with the name of the target where the device is located. Note You can also re-scan iSCSI logical units using the following command: Replace interface with the corresponding interface name of the resized logical unit (for example, iface0 ). This command performs two operations: It scans for new devices in the same way that the command echo "- - -" > /sys/class/scsi_host/ host /scan does (refer to Section 25.15, "Scanning iSCSI Interconnects" ). It re-scans for new/modified logical units the same way that the command echo 1 > /sys/block/sdX/device/rescan does. Note that this command is the same one used for re-scanning Fibre Channel logical units. 25.17.3. Updating the Size of Your Multipath Device If multipathing is enabled on your system, you will also need to reflect the change in logical unit size to the logical unit's corresponding multipath device ( after resizing the logical unit). This can be done through multipathd . To do so, first ensure that multipathd is running using service multipathd status . Once you've verified that multipathd is operational, run the following command: The multipath_device variable is the corresponding multipath entry of your device in /dev/mapper . Depending on how multipathing is set up on your system, multipath_device can be either of two formats: mpath X , where X is the corresponding entry of your device (for example, mpath0 ) a WWID; for example, 3600508b400105e210000900000490000 To determine which multipath entry corresponds to your resized logical unit, run multipath -ll . This displays a list of all existing multipath entries in the system, along with the major and minor numbers of their corresponding devices. Important Do not use multipathd -k"resize map multipath_device " if there are any commands queued to multipath_device . 
That is, do not use this command when the no_path_retry parameter (in /etc/multipath.conf ) is set to "queue" , and there are no active paths to the device. For more information about multipathing, refer to the Red Hat Enterprise Linux 7 DM Multipath guide. 25.17.4. Changing the Read/Write State of an Online Logical Unit Certain storage devices provide the user with the ability to change the state of the device from Read/Write (R/W) to Read-Only (RO), and from RO to R/W. This is typically done through a management interface on the storage device. The operating system will not automatically update its view of the state of the device when a change is made. Follow the procedures described in this chapter to make the operating system aware of the change. Run the following command, replacing XYZ with the desired device designator, to determine the operating system's current view of the R/W state of a device: The following command is also available for Red Hat Enterprise Linux 7: When using multipath, refer to the ro or rw field in the second line of output from the multipath -ll command. For example: To change the R/W state, use the following procedure: Procedure 25.14. Change the R/W State To move the device from RO to R/W, see step 2. To move the device from R/W to RO, ensure no further writes will be issued. Do this by stopping the application, or through the use of an appropriate, application-specific action. Ensure that all outstanding write I/Os are complete with the following command: Replace device with the desired designator; for a device mapper multipath, this is the entry for your device in /dev/mapper . For example, /dev/mapper/ mpath3 . Use the management interface of the storage device to change the state of the logical unit from R/W to RO, or from RO to R/W. The procedure for this differs with each array. Consult applicable storage array vendor documentation for more information. Perform a re-scan of the device to update the operating system's view of the R/W state of the device. If using a device mapper multipath, perform this re-scan for each path to the device before issuing the command telling multipath to reload its device maps. This process is explained in further detail in Section 25.17.4.1, "Rescanning Logical Units" . 25.17.4.1. Rescanning Logical Units After modifying the online logical unit Read/Write state, as described in Section 25.17.4, "Changing the Read/Write State of an Online Logical Unit" , re-scan the logical unit to ensure the system detects the updated state with the following command: To re-scan logical units on a system that uses multipathing, execute the above command for each sd device that represents a path for the multipathed logical unit. For example, run the command on sd1, sd2 and all other sd devices. To determine which devices are paths for a multipath unit, use multipath -ll , then find the entry that matches the logical unit to be changed. Example 25.15. Use of the multipath -ll Command For example, the multipath -ll output above shows the path for the LUN with WWID 36001438005deb4710000500000640000. In this case, enter: 25.17.4.2. Updating the R/W State of a Multipath Device If multipathing is enabled, after rescanning the logical unit, the change in its state will need to be reflected in the logical unit's corresponding multipath device. Do this by reloading the multipath device maps with the following command: The multipath -ll command can then be used to confirm the change. 25.17.4.3.
Documentation Further information can be found in the Red Hat Knowledgebase. To access this, navigate to https://www.redhat.com/wapps/sso/login.html?redirect=https://access.redhat.com/knowledge/ and log in. Then access the article at https://access.redhat.com/kb/docs/DOC-32850 .
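Because each path of a multipathed logical unit must be re-scanned individually, a short shell loop can save repetition. The following sketch assumes the sdax, sday, sdaz, and sdba paths from Example 25.15; substitute the sd devices that multipath -ll reports for your own logical unit:

# Re-scan every path of the multipathed logical unit, then reload the multipath device maps
for dev in sdax sday sdaz sdba; do
    echo 1 > /sys/block/${dev}/device/rescan
done
multipath -r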
|
[
"echo 1 > /sys/block/sd X /device/rescan",
"iscsiadm -m node --targetname target_name -R [5]",
"iscsiadm -m node -R -I interface",
"multipathd -k\"resize map multipath_device \"",
"blockdev --getro /dev/sd XYZ",
"cat /sys/block/sd XYZ /ro 1 = read-only 0 = read-write",
"36001438005deb4710000500000640000 dm-8 GZ,GZ500 [size=20G][features=0][hwhandler=0][ro] \\_ round-robin 0 [prio=200][active] \\_ 6:0:4:1 sdax 67:16 [active][ready] \\_ 6:0:5:1 sday 67:32 [active][ready] \\_ round-robin 0 [prio=40][enabled] \\_ 6:0:6:1 sdaz 67:48 [active][ready] \\_ 6:0:7:1 sdba 67:64 [active][ready]",
"blockdev --flushbufs /dev/ device",
"echo 1 > /sys/block/sd X /device/rescan",
"echo 1 > /sys/block/sd ax /device/rescan # echo 1 > /sys/block/sd ay /device/rescan # echo 1 > /sys/block/sd az /device/rescan # echo 1 > /sys/block/sd ba /device/rescan",
"multipath -r"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/online-iscsi-resizing
|
15.7. Removing a Host from a Self-Hosted Engine Environment
|
15.7. Removing a Host from a Self-Hosted Engine Environment To remove a self-hosted engine node from your environment, place the node into maintenance mode, undeploy the node, and optionally remove it. The node can be managed as a regular host after the HA services have been stopped, and the self-hosted engine configuration files have been removed. Removing a Host from a Self-Hosted Engine Environment In the Administration Portal, click Compute Hosts and select the self-hosted engine node. Click Management Maintenance and click OK . Click Installation Reinstall . Click the Hosted Engine tab and select UNDEPLOY from the drop-down list. This action stops the ovirt-ha-agent and ovirt-ha-broker services and removes the self-hosted engine configuration file. Click OK . Optionally, click Remove to open the Remove Host(s) confirmation window and click OK .
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/Removing_a_Host_from_a_Self-Hosted_Engine_Environment
|
Chapter 4. Customizing the Storage service
|
Chapter 4. Customizing the Storage service The heat template collection provided by director already contains the necessary templates and environment files to enable a basic Ceph Storage configuration. Director uses the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file to create a Ceph cluster and integrate it with your overcloud during deployment. This cluster features containerized Ceph Storage nodes. For more information about containerized services in OpenStack, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide. The Red Hat OpenStack director also applies basic, default settings to the deployed Ceph cluster. You must also define any additional configuration in a custom environment file. Procedure Create the file storage-config.yaml in /home/stack/templates/ . In this example, the ~/templates/storage-config.yaml file contains most of the overcloud-related custom settings for your environment. Parameters that you include in the custom environment file override the corresponding default settings from the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml file. Add a parameter_defaults section to ~/templates/storage-config.yaml . This section contains custom settings for your overcloud. For example, to set vxlan as the network type of the networking service ( neutron ), add the following snippet to your custom environment file: If necessary, set the following options under parameter_defaults according to your requirements: Option Description Default value CinderEnableIscsiBackend Enables the iSCSI backend false CinderEnableRbdBackend Enables the Ceph Storage back end true CinderBackupBackend Sets ceph or swift as the back end for volume backups. For more information, see Section 4.4, "Configuring the Backup Service to use Ceph" . ceph NovaEnableRbdBackend Enables Ceph Storage for Nova ephemeral storage true GlanceBackend Defines which back end the Image service should use: rbd (Ceph), swift , or file rbd GnocchiBackend Defines which back end the Telemetry service should use: rbd (Ceph), swift , or file rbd Note You can omit an option from ~/templates/storage-config.yaml if you intend to use the default setting. The contents of your custom environment file change depending on the settings that you apply in the following sections. See Appendix A, Sample environment file: creating a Ceph Storage cluster for a completed example. 4.1. Enabling the Ceph Metadata Server The Ceph Metadata Server (MDS) runs the ceph-mds daemon, which manages metadata related to files stored on CephFS. CephFS can be consumed through NFS. For more information about using CephFS through NFS, see File System Guide and Deploying the Shared File Systems service with CephFS through NFS . Note Red Hat supports deploying Ceph MDS only with the CephFS through NFS back end for the Shared File Systems service. Procedure To enable the Ceph Metadata Server, invoke the following environment file when you create your overcloud: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml For more information, see Section 7.2, "Initiating overcloud deployment" . For more information about the Ceph Metadata Server, see Configuring Metadata Server Daemons . Note By default, the Ceph Metadata Server is deployed on the Controller node. You can deploy the Ceph Metadata Server on its own dedicated node. 
For more information, see Section 3.3, "Creating a custom role and flavor for the Ceph MDS service" . 4.2. Enabling the Ceph Object Gateway The Ceph Object Gateway (RGW) provides applications with an interface to object storage capabilities within a Ceph Storage cluster. When you deploy RGW, you can replace the default Object Storage service ( swift ) with Ceph. For more information, see Object Gateway Configuration and Administration Guide . Procedure To enable RGW in your deployment, invoke the following environment file when you create the overcloud: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml For more information, see Section 7.2, "Initiating overcloud deployment" . By default, Ceph Storage allows 250 placement groups per OSD. When you enable RGW, Ceph Storage creates six additional pools that are required by RGW. The new pools are: .rgw.root default.rgw.control default.rgw.meta default.rgw.log default.rgw.buckets.index default.rgw.buckets.data Note In your deployment, default is replaced with the name of the zone to which the pools belong. Therefore, when you enable RGW, set the default pg_num by using the CephPoolDefaultPgNum parameter to account for the new pools. For more information about how to calculate the number of placement groups for Ceph pools, see Section 5.4, "Assigning custom attributes to different Ceph pools" . The Ceph Object Gateway is a direct replacement for the default Object Storage service. As such, all other services that normally use swift can seamlessly use the Ceph Object Gateway instead without further configuration. For more information, see the Block Storage Backup Guide . 4.3. Configuring Ceph Object Store to use external Ceph Object Gateway Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone). For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide . Procedure Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml , and adjust the values to suit your deployment: Note The example code snippet contains parameter values that might differ from values that you use in your environment: The default port where the remote RGW instance listens is 8080 . The port might be different depending on how the external RGW is configured. The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password . Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment: Note Director creates the following roles and users in the Identity service by default: rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator rgw_keystone_admin_domain: default rgw_keystone_admin_project: service rgw_keystone_admin_user: swift Deploy the overcloud with the additional environment files with any other environment files that are relevant to your deployment: Verification Log in to the undercloud as the stack user. 
Source the overcloudrc file: Verify that the endpoints exist in the Identity service (keystone): Create a test container: Create a configuration file to confirm that you can upload data to the container: Delete the test container: 4.4. Configuring the Backup Service to use Ceph The Block Storage Backup service ( cinder-backup ) is disabled by default. To enable the Block Storage Backup service, complete the following steps: Procedure Invoke the following environment file when you create your overcloud: /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml . 4.5. Configuring multiple bonded interfaces for Ceph nodes Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability. You can then use a bonded interface for each network connection that the node requires. This provides both redundancy and a dedicated connection for each network. The simplest implementation of bonded interfaces involves the use of two bonds, one for each storage network used by the Ceph nodes. These networks are the following: Front-end storage network ( StorageNet ) The Ceph client uses this network to interact with the corresponding Ceph cluster. Back-end storage network ( StorageMgmtNet ) The Ceph cluster uses this network to balance data in accordance with the placement group policy of the cluster. For more information, see Placement Groups (PG) in the Red Hat Ceph Architecture Guide . To configure multiple bonded interfaces, you must create a new network interface template, as the director does not provide any sample templates that you can use to deploy multiple bonded NICs. However, the director does provide a template that deploys a single bonded interface. This template is /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml . You can define an additional bonded interface for your additional NICs in this template. Note For more information about creating custom interface templates, see Creating Custom Interface Templates in the Advanced Overcloud Customization guide. The following snippet contains the default definition for the single bonded interface defined in the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml file: 1 A single bridge named br-bond holds the bond defined in this template. This line defines the bridge type, namely OVS. 2 The first member of the br-bond bridge is the bonded interface itself, named bond1 . This line defines the bond type of bond1 , which is also OVS. 3 The default bond is named bond1 . 4 The ovs_options entry instructs director to use a specific set of bonding module directives. Those directives are passed through the BondInterfaceOvsOptions parameter, which you can also configure in this file. For more information about configuring bonding module directives, see Section 4.5.1, "Configuring bonding module directives" . 5 The members section of the bond defines which network interfaces are bonded by bond1 . In this example, the bonded interface uses nic2 (set as the primary interface) and nic3 . 6 The br-bond bridge has two other members: a VLAN for both front-end ( StorageNetwork ) and back-end ( StorageMgmtNetwork ) storage networks. 7 The device parameter defines which device a VLAN should use. In this example, both VLANs use the bonded interface, bond1 .
With at least two more NICs, you can define an additional bridge and bonded interface. Then, you can move one of the VLANs to the new bonded interface, which increases throughput and reliability for both storage network connections. When you customize the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml file for this purpose, Red Hat recommends that you use Linux bonds ( type: linux_bond ) instead of the default OVS ( type: ovs_bond ). This bond type is more suitable for enterprise production deployments. The following edited snippet defines an additional OVS bridge ( br-bond2 ) which houses a new Linux bond named bond2 . The bond2 interface uses two additional NICs, nic4 and nic5 , and is used solely for back-end storage network traffic: 1 As bond1 and bond2 are both Linux bonds (instead of OVS), they use bonding_options instead of ovs_options to set bonding directives. For more information, see Section 4.5.1, "Configuring bonding module directives" . For the full contents of this customized template, see Appendix B, Sample custom interface template: multiple bonded interfaces . 4.5.1. Configuring bonding module directives After you add and configure the bonded interfaces, use the BondInterfaceOvsOptions parameter to set the directives that you want each bonded interface to use. You can find this information in the parameters: section of the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml file. The following snippet shows the default definition of this parameter (namely, empty): Define the options you need in the default: line. For example, to use 802.3ad (mode 4) and a LACP rate of 1 (fast), use 'mode=4 lacp_rate=1' : For more information about other supported bonding options, see Open vSwitch Bonding Options in the Advanced Overcloud Optimization guide. For the full contents of the customized /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml template, see Appendix B, Sample custom interface template: multiple bonded interfaces .
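As a concrete illustration of the last point, the bonding directives can be supplied through a small custom environment file that you pass to the deployment with -e, alongside the other environment files. This is a sketch only; the file name ~/templates/bonding-options.yaml is an assumption, and the directives shown are the 802.3ad example from above:

# Write the bonding directives to a custom environment file and include it with -e at deploy time
cat > ~/templates/bonding-options.yaml <<'EOF'
parameter_defaults:
  BondInterfaceOvsOptions: 'mode=4 lacp_rate=1'
EOF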
|
[
"parameter_defaults: NeutronNetworkType: vxlan",
"parameter_defaults: ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftUserTenant: 'service' SwiftPassword: 'choose_a_random_password'",
"rgw_keystone_api_version = 3 rgw_keystone_url = http://<public Keystone endpoint>:5000/ rgw_keystone_accepted_roles = member, Member, admin rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator rgw_keystone_admin_domain = default rgw_keystone_admin_project = service rgw_keystone_admin_user = swift rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters> rgw_keystone_implicit_tenants = true rgw_keystone_revocation_interval = 0 rgw_s3_auth_use_keystone = true rgw_swift_versioning_enabled = true rgw_swift_account_in_url = true",
"openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml -e swift-external-params.yaml",
"source ~/stackrc",
"openstack endpoint list --service object-store +---------+-----------+-------+-------+---------+-----------+---------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +---------+-----------+-------+-------+---------+-----------+---------------+ | 233b7ea32aaf40c1ad782c696128aa0e | regionOne | swift | object-store | True | admin | http://192.168.24.3:8080/v1/AUTH_%(project_id)s | | 4ccde35ac76444d7bb82c5816a97abd8 | regionOne | swift | object-store | True | public | https://192.168.24.2:13808/v1/AUTH_%(project_id)s | | b4ff283f445348639864f560aa2b2b41 | regionOne | swift | object-store | True | internal | http://192.168.24.3:8080/v1/AUTH_%(project_id)s | +---------+-----------+-------+-------+---------+-----------+---------------+",
"openstack container create <testcontainer> +----------------+---------------+------------------------------------+ | account | container | x-trans-id | +----------------+---------------+------------------------------------+ | AUTH_2852da3cf2fc490081114c434d1fc157 | testcontainer | tx6f5253e710a2449b8ef7e-005f2d29e8 | +----------------+---------------+------------------------------------+",
"openstack object create testcontainer undercloud.conf +-----------------+---------------+----------------------------------+ | object | container | etag | +-----------------+---------------+----------------------------------+ | undercloud.conf | testcontainer | 09fcffe126cac1dbac7b89b8fd7a3e4b | +-----------------+---------------+----------------------------------+",
"openstack container delete -r <testcontainer>",
"type: ovs_bridge // 1 name: br-bond members: - type: ovs_bond // 2 name: bond1 // 3 ovs_options: {get_param: BondInterfaceOvsOptions} 4 members: // 5 - type: interface name: nic2 primary: true - type: interface name: nic3 - type: vlan // 6 device: bond1 // 7 vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet} - type: vlan device: bond1 vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet}",
"type: ovs_bridge name: br-bond members: - type: linux_bond name: bond1 bonding_options : {get_param: BondInterfaceOvsOptions} // 1 members: - type: interface name: nic2 primary: true - type: interface name: nic3 - type: vlan device: bond1 vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet} - type: ovs_bridge name: br-bond2 members: - type: linux_bond name: bond2 bonding_options : {get_param: BondInterfaceOvsOptions} members: - type: interface name: nic4 primary: true - type: interface name: nic5 - type: vlan device: bond1 vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet}",
"BondInterfaceOvsOptions: default: '' description: The ovs_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string",
"BondInterfaceOvsOptions: default: 'mode=4 lacp_rate=1' description: The bonding_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_an_overcloud_with_containerized_red_hat_ceph/enable-ceph-overcloud
|
17.4.3. Problems with the X Window System (GUI)
|
17.4.3. Problems with the X Window System (GUI) If you are having trouble getting X (the X Window System) to start, you may not have installed it during your installation. If you want X, you can either install the packages from the Red Hat Enterprise Linux installation media or perform an upgrade. If you elect to upgrade, select the X Window System packages, and choose GNOME, KDE, or both, during the upgrade package selection process. Refer to Section 35.3, "Switching to a Graphical Login" for more detail on installing a desktop environment.
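As a sketch of the non-upgrade path, on a registered system that can reach the installation media or a configured yum repository, you might install the required package groups with yum; the group names below are the standard Red Hat Enterprise Linux 6 group names and are shown for illustration only:

# Install the X server plus the GNOME desktop environment
yum groupinstall "X Window System" "Desktop"
# For KDE instead of (or in addition to) GNOME:
yum groupinstall "KDE Desktop"

After installation, you can switch to a graphical login as described in Section 35.3, for example by setting the default runlevel to 5 in /etc/inittab.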
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch17s04s03
|
Chapter 2. Managing compute machines with the Machine API
|
Chapter 2. Managing compute machines with the Machine API 2.1. Creating a compute machine set on AWS You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.1.1. Sample YAML for a compute machine set custom resource on AWS The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) Local Zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, role node label, and zone. 3 Specify the role node label to add. 4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. 
If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1a . 6 Specify the region, for example, us-east-1 . 7 Specify the infrastructure ID and zone. 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 2.1.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets. Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.1.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.1.4. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. 
Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The interface type field indicates that it uses an EFA. 2.1.5. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values. You can also edit an existing machine set to create new machines with your preferred IMDS configuration when the machine set is scaled up. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 2.1.5.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 2.1.6. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 2.1.6.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 2.1.7. Machine sets that deploy machines as Spot Instances You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. 
AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning. Interruptions can occur when using Spot Instances for the following reasons: The instance price exceeds your maximum price The demand for Spot Instances increases The supply of Spot Instances decreases When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance. 2.1.7.1. Creating Spot Instances by using compute machine sets You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotMarketOptions: {} You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example you can set maxPrice: '2.50' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to charge up to the On-Demand Instance price. Note It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances. 2.1.8. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider. For more information about the supported instance types, see the following NVIDIA documentation: NVIDIA GPU Operator Community support matrix NVIDIA AI Enterprise support matrix Procedure View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.29.4 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.29.4 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.29.4 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.29.4 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.29.4 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.29.4 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone. 
USD oc get machines -n openshift-machine-api | grep worker Example output preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file and make the following changes to the new MachineSet definition: Replace worker with gpu . This will be the name of the new machine set. Change the instance type of the new MachineSet definition to g4dn , which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing . USD jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json "g4dn.xlarge" The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json . Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json : .metadata.name to a name containing gpu . .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge . To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json - Example output 10c10 < "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a", --- > "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a", 21c21 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 31c31 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 60c60 < "instanceType": "g4dn.xlarge", --- > "instanceType": "m5.xlarge", Create the GPU-enabled compute machine set from the definition by running the following command: USD oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json Example output machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.1.9. 
Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator into OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalogue them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.2. Creating a compute machine set on Azure You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.2.1.
Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 
6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 2.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.2.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.2.4. Using the Azure Marketplace offering You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 2.2.5. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.2.6. 
Machine sets that deploy machines as Spot VMs You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning. Interruptions can occur when using Spot VMs for the following reasons: The instance price exceeds your maximum price The supply of Spot VMs decreases Azure needs capacity back When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM. 2.2.6.1. Creating Spot VMs by using compute machine sets You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotVMOptions: {} You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example you can set maxPrice: '0.98765' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price. Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice . However, an instance can still be evicted due to capacity restrictions. Note It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs. 2.2.7. Machine sets that deploy machines on Ephemeral OS disks You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. Additional resources For more information, see the Microsoft Azure documentation about Ephemeral OS disks for Azure VMs . 2.2.7.1. Creating machines on Ephemeral OS disks by using compute machine sets You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Edit the custom resource (CR) by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks. Add the following to the providerSpec field: providerSpec: value: ... osDisk: ... diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4 ... 1 2 3 These lines enable the use of Ephemeral OS disks. 4 Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type. Important The implementation of Ephemeral OS disk support in OpenShift Container Platform only supports the CacheDisk placement type. Do not change the placement configuration setting. 
Create a compute machine set using the updated configuration: USD oc create -f <machine-set-config>.yaml Verification On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement . 2.2.8. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods. Note Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks using in-tree PVCs 2.2.8.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with worker . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 
8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with worker . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with worker . Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with worker . Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 2.2.8.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 2.2.8.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. 
To resolve this issue, verify that your machine set configuration is correct. 2.2.8.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 2.2.8.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired. 2.2.9. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.2.10. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.16 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 2.1. 
UEFI feature combination compatibility
Secure Boot [1]   vTPM [2]    Valid configuration
Enabled           Enabled     Yes
Enabled           Disabled    Yes
Enabled           Omitted     Yes
Disabled          Enabled     Yes
Omitted           Enabled     Yes
Disabled          Disabled    No
Omitted           Disabled    No
Omitted           Omitted     No
[1] Using the secureBoot field. [2] Using the virtualizedTrustedPlatformModule field.
For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 2.2.11. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.16 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM.
3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 2.2.12. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation. 2.2.12.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. 2.2.13. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.16.3 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. 
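The capacityReservationGroupID value is the full Azure resource ID of the Capacity Reservation group, in the form /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/capacityReservationGroups/<capacity_reservation_group_name>. If you have the Azure CLI available, a short sketch such as the following can retrieve it; the group and resource group names are placeholders, and availability of the capacity reservation command group depends on your Azure CLI version:

$ az capacity reservation group show \
    --name <capacity_reservation_group_name> \
    --resource-group <resource_group_name> \
    --query id --output tsv

Paste the printed ID into the capacityReservationGroupID field shown above.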
Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> where <machine_set_name> is the name of the compute machine set. In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 2.2.14. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider. The following table lists the validated instance types: vmSize NVIDIA GPU accelerator Maximum number of GPUs Architecture Standard_NC24s_v3 V100 4 x86 Standard_NC4as_T4_v3 T4 1 x86 ND A100 v4 A100 8 x86 Note By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. Customers have to request a quota increase for the Azure instance families listed above. Procedure View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m Make a copy of one of the existing compute MachineSet definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. 
USD oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml View the content of the machineset: USD cat machineset-azure.yaml Example machineset-azure.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "0" machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T14:08:19Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: "23601" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 Make a copy of the machineset-azure.yaml file by running the following command: USD cp machineset-azure.yaml machineset-azure-gpu.yaml Update the following fields in machineset-azure-gpu.yaml : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name. Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.vmSize to Standard_NC4as_T4_v3 . 
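If you prefer to script these edits instead of making them in a text editor, a minimal sketch using the mikefarah yq v4 CLI follows; the tool itself is an assumption and not part of the documented procedure, and the machine set name matches the example file shown next:

$ yq -i '
    .metadata.name = "myclustername-nc4ast4-gpu-worker-centralus1" |
    .spec.selector.matchLabels."machine.openshift.io/cluster-api-machineset" = "myclustername-nc4ast4-gpu-worker-centralus1" |
    .spec.template.metadata.labels."machine.openshift.io/cluster-api-machineset" = "myclustername-nc4ast4-gpu-worker-centralus1" |
    .spec.template.spec.providerSpec.value.vmSize = "Standard_NC4as_T4_v3"
  ' machineset-azure-gpu.yaml

However you apply the edits, the resulting file should match the following example.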
Example machineset-azure-gpu.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "1" machine.openshift.io/memoryMb: "28672" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T20:27:12Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: "166285" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD diff machineset-azure.yaml machineset-azure-gpu.yaml Example output 14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3 Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml Example output machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. 
USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.29.4 myclustername-master-1 Ready control-plane,master 6h41m v1.29.4 myclustername-master-2 Ready control-plane,master 6h39m v1.29.4 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.29.4 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.29.4 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.29.4 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.29.4 View the list of compute machine sets: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml View the list of compute machine sets: oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Verification View the machine set you created by running the following command: USD oc get machineset -n openshift-machine-api | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m Note There is no need to specify a namespace for the node. The node definition is cluster scoped. 2.2.15. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. 
The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator into OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalogue them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. Additional resources Enabling Accelerated Networking during installation 2.2.15.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Next steps To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . Additional resources Manually scaling a compute machine set 2.3. Creating a compute machine set on Azure Stack Hub You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure Stack Hub.
For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.3.1. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 1 5 7 13 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID, node label, and region. 14 Specify the region to place machines on. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 12 Specify the availability set for the cluster. 2.3.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Create an availability set in which to deploy Azure Stack Hub compute machines. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <availabilitySet> , <clusterID> , and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.3.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.3.4. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure Stack Hub cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.3.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . 
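If you prefer to prepare these prerequisites from the command line rather than the portal, the following sketch shows the general flow with the Azure CLI. The resource names are placeholders, and whether each command is supported by your Azure Stack Hub API profile is an assumption to verify against the Microsoft documentation linked above:

$ az keyvault create --name <key_vault_name> \
    --resource-group <resource_group_name> --location <region> \
    --enable-purge-protection true
$ az keyvault key create --vault-name <key_vault_name> \
    --name <key_name> --protection software
$ az disk-encryption-set create --name <disk_encryption_set_name> \
    --resource-group <resource_group_name> \
    --source-vault <key_vault_name> \
    --key-url "$(az keyvault key show --vault-name <key_vault_name> \
        --name <key_name> --query key.kid --output tsv)"
$ az keyvault set-policy --name <key_vault_name> \
    --object-id "$(az disk-encryption-set show --name <disk_encryption_set_name> \
        --resource-group <resource_group_name> --query identity.principalId --output tsv)" \
    --key-permissions get wrapKey unwrapKey

The last command corresponds to the "grant access" prerequisite; a key vault that uses Azure RBAC for authorization would use a role assignment instead of an access policy.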
Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.4. Creating a compute machine set on GCP You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.4.1. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" , where <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <node> , specify the node label to add. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 2.4.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. 
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.4.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. 
For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.4.4. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1 1 Specify the persistent disk type. Valid values are pd-ssd , pd-standard , and pd-balanced . The default value is pd-standard . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 2.4.5. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.16 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 2.4.6. Machine sets that deploy machines as preemptible VM instances You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. 
An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine. Interruptions can occur when using preemptible VM instances for the following reasons: There is a system or maintenance event The supply of preemptible VM instances decreases The instance reaches the end of the allotted 24-hour period for preemptible VM instances When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance. 2.4.6.1. Creating preemptible VM instances by using compute machine sets You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: preemptible: true If preemptible is set to true , the machine is labelled as an interruptable-instance after the instance is launched. 2.4.7. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 2.4.8. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. 
The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 2.4.9. Enabling GPU support for a compute machine set Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on GCP supports NVIDIA GPU models in the A2 and N1 machine series. Table 2.2. Supported GPU configurations Model name GPU type Machine types [1] NVIDIA A100 nvidia-tesla-a100 a2-highgpu-1g a2-highgpu-2g a2-highgpu-4g a2-highgpu-8g a2-megagpu-16g NVIDIA K80 nvidia-tesla-k80 n1-standard-1 n1-standard-2 n1-standard-4 n1-standard-8 n1-standard-16 n1-standard-32 n1-standard-64 n1-standard-96 n1-highmem-2 n1-highmem-4 n1-highmem-8 n1-highmem-16 n1-highmem-32 n1-highmem-64 n1-highmem-96 n1-highcpu-2 n1-highcpu-4 n1-highcpu-8 n1-highcpu-16 n1-highcpu-32 n1-highcpu-64 n1-highcpu-96 NVIDIA P100 nvidia-tesla-p100 NVIDIA P4 nvidia-tesla-p4 NVIDIA T4 nvidia-tesla-t4 NVIDIA V100 nvidia-tesla-v100 For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series , A2 machine series , and GPU regions and zones availability . You can define which supported GPU to use for an instance by using the Machine API. You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators. Note GPUs for graphics workloads are not supported. Procedure In a text editor, open the YAML file for an existing compute machine set or create a new one. Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. 
See the following examples of valid configurations: Example configuration for the A2 machine series providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3 1 Specify the machine type. Ensure that the machine type is included in the A2 machine series. 2 When using GPU support, you must set onHostMaintenance to Terminate . 3 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . Example configuration for the N1 machine series providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5 1 Specify the number of GPUs to attach to the machine. 2 Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible. 3 Specify the machine type. Ensure that the machine type and GPU type are compatible. 4 When using GPU support, you must set onHostMaintenance to Terminate . 5 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . 2.4.10. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider. The following table lists the validated instance types: Instance type NVIDIA GPU accelerator Maximum number of GPUs Architecture a2-highgpu-1g A100 1 x86 n1-standard-4 T4 1 x86 Procedure Make a copy of an existing MachineSet . In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the instance type to add the following two lines to the newly copied MachineSet : Example a2-highgpu-1g.json file { "apiVersion": "machine.openshift.io/v1beta1", "kind": "MachineSet", "metadata": { "annotations": { "machine.openshift.io/GPU": "0", "machine.openshift.io/memoryMb": "16384", "machine.openshift.io/vCPU": "4" }, "creationTimestamp": "2023-01-13T17:11:02Z", "generation": 1, "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p" }, "name": "myclustername-2pt9p-worker-gpu-a", "namespace": "openshift-machine-api", "resourceVersion": "20185", "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd" }, "spec": { "replicas": 1, "selector": { "matchLabels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "template": { "metadata": { "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machine-role": "worker", "machine.openshift.io/cluster-api-machine-type": "worker", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "spec": { "lifecycleHooks": {}, "metadata": {}, "providerSpec": { "value": { "apiVersion": "machine.openshift.io/v1beta1", "canIPForward": false, "credentialsSecret": { "name": "gcp-cloud-credentials" }, "deletionProtection": false, "disks": [ { "autoDelete": true, "boot": true, "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64", "labels": null, "sizeGb": 128, "type": "pd-ssd" } ], "kind": "GCPMachineProviderSpec", "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", "metadata": { "creationTimestamp": null }, "networkInterfaces": [ { "network": "myclustername-2pt9p-network", "subnetwork": "myclustername-2pt9p-worker-subnet" } ], 
"preemptible": true, "projectID": "myteam", "region": "us-central1", "serviceAccounts": [ { "email": "[email protected]", "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] } ], "tags": [ "myclustername-2pt9p-worker" ], "userDataSecret": { "name": "worker-user-data" }, "zone": "us-central1-a" } } } } }, "status": { "availableReplicas": 1, "fullyLabeledReplicas": 1, "observedGeneration": 1, "readyReplicas": 1, "replicas": 1 } } View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.29.4 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.29.4 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.29.4 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.29.4 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.29.4 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.29.4 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.29.4 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api | grep worker Example output myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file to make the following changes to the new MachineSet definition: Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the machineType of the new MachineSet definition to a2-highgpu-1g , which includes an NVIDIA A100 GPU. jq .spec.template.spec.providerSpec.value.machineType ocp_4.16_machineset-a2-highgpu-1g.json "a2-highgpu-1g" The <output_file.json> file is saved as ocp_4.16_machineset-a2-highgpu-1g.json . Update the following fields in ocp_4.16_machineset-a2-highgpu-1g.json : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . 
Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g . Add the following line under machineType : "onHostMaintenance": "Terminate" . For example: "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.16_machineset-a2-highgpu-1g.json - Example output 15c15 < "name": "myclustername-2pt9p-worker-gpu-a", --- > "name": "myclustername-2pt9p-worker-a", 25c25 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 34c34 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 59,60c59 < "machineType": "a2-highgpu-1g", < "onHostMaintenance": "Terminate", --- > "machineType": "n2-standard-4", Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f ocp_4.16_machineset-a2-highgpu-1g.json Example output machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m Note There is no need to specify a namespace for the node. The node definition is cluster scoped. 2.4.11. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator into OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalogue them.
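If you prefer to create the custom resource from the CLI instead of the console form, a minimal sketch follows. The file name is illustrative, and the empty spec relies on the Operator defaults, which is an assumption; compare it with the fields that the console form populates for your NFD Operator version:

apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec: {}

Save the manifest, for example as nfd-instance.yaml, and apply it:

$ oc apply -f nfd-instance.yaml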
Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.5. Creating a compute machine set on IBM Cloud You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Cloud(R). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.5.1. Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud(R) zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud(R) instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 2.5.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.5.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.6. 
Creating a compute machine set on IBM Power Virtual Server You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Power(R) Virtual Server. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.6.1. Sample YAML for a compute machine set custom resource on IBM Power Virtual Server This sample YAML file defines a compute machine set that runs in a specified IBM Power(R) Virtual Server zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: "0.5" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID within your region to place machines on. 2.6.2. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.6.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. 
Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7. Creating a compute machine set on Nutanix You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.7.1. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.16. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify the size of the system disk in Gi. 12 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 13 Specify the number of vCPU sockets. 14 Specify the number of vCPUs per socket. 2.7.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). 
Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.7.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . 
and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7.4. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 2.8. Creating a compute machine set on OpenStack You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.8.1. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone> 1 5 7 13 15 16 17 18 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID and node label. 11 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 12 Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value. 14 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 2.8.2. Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology. This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.openshift.io/<node_role>: "" In this sample, infrastructure_id is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and node_role is the node label to add. The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list. Note Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP". 
An example compute machine set that uses SR-IOV networks apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 5 Enter a network UUID for each port. 2 6 Enter a subnet UUID for each port. 3 7 The value of the vnicType parameter must be direct for each port. 4 8 The value of the portSecurity parameter must be false for each port. You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it. Important After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter: USD oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true" Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. Additional resources Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack 2.8.3. Sample YAML for SR-IOV deployments where port security is disabled To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. This difference from the standard SR-IOV compute machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces. 
Ports that you define for machines subnets require: Allowed address pairs for the API and ingress virtual IP ports The compute security group Attachment to the machines network and subnet Note Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP". An example compute machine set that uses SR-IOV networks and has port security disabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data 1 Specify allowed address pairs for the API and ingress ports. 2 3 Specify the machines network and subnet. 4 Specify the compute machines security group. Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. 2.8.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.8.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.9. 
Creating a compute machine set on vSphere You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.9.1. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 11 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 12 Specify the vCenter data center to deploy the compute machine set on. 13 Specify the vCenter datastore to deploy the compute machine set on. 
14 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 15 Specify the vSphere resource pool for your VMs. 16 Specify the vCenter server IP or fully qualified domain name. 2.9.2. Minimum required vCenter privileges for compute machine set management To manage compute machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete compute machine sets and to delete machines in your OpenShift Container Platform cluster. Example 2.1. Minimum vCenter roles and privileges required for compute machine set management vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update 1 StorageProfile.View 1 vSphere vCenter Cluster Always Resource.AssignVMToPool vSphere datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter data center If the installation program creates the virtual machine folder Resource.AssignVMToPool VirtualMachine.Provisioning.DeployTemplate 1 The StorageProfile.Update and StorageProfile.View permissions are required only for storage backends that use the Container Storage Interface (CSI). The following table details the permissions and propagation settings that are required for compute machine set management. Example 2.2. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always Not required Listed required privileges vSphere vCenter data center Existing folder Not required ReadOnly permission Installation program creates the folder Required Listed required privileges vSphere vCenter Cluster Always Required Listed required privileges vSphere vCenter datastore Always Not required Listed required privileges vSphere Switch Always Not required ReadOnly permission vSphere Port Group Always Not required Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder Required Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. 2.9.3. 
Requirements for clusters with user-provisioned infrastructure to use compute machine sets To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API. Obtaining the infrastructure ID To create compute machine sets, you must be able to supply the infrastructure ID for your cluster. Procedure To obtain the infrastructure ID for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}' Satisfying vSphere credentials requirements To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace. Procedure To determine whether the required credentials exist, run the following command: USD oc get secret \ -n openshift-machine-api vsphere-cloud-credentials \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output <vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user> where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use. If the secret does not exist, create it by running the following command: USD oc create secret generic vsphere-cloud-credentials \ -n openshift-machine-api \ --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password> Satisfying Ignition configuration requirements Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator. By default, this configuration is stored in the worker-user-data secret in the openshift-machine-api namespace. Compute machine sets reference the secret during the machine creation process. Procedure To determine whether the required secret exists, run the following command: USD oc get secret \ -n openshift-machine-api worker-user-data \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output disableTemplating: false userData: 1 { "ignition": { ... }, ... } 1 The full output is omitted here, but should have this format. If the secret does not exist, create it by running the following command: USD oc create secret generic worker-user-data \ -n openshift-machine-api \ --from-file=<installation_directory>/worker.ign where <installation_directory> is the directory that was used to store your installation assets during cluster installation. Additional resources Understanding the Machine Config Operator Installing RHCOS and starting the OpenShift Container Platform bootstrap process 2.9.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Note Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program.
As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker and infra type machines. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified. If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values: Example vSphere providerSpec values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... template: ... 
spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" numCPUs: 4 numCoresPerSocket: 4 snapshot: "" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4 1 The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials. 2 The name of the RHCOS VM template for your cluster that was created during installation. 3 The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials. 4 The IP address or fully qualified domain name (FQDN) of the vCenter server. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.9.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.9.6. Adding tags to machines by using machine sets OpenShift Container Platform adds a cluster-specific tag to each virtual machine (VM) that it creates. The installation program uses these tags to select the VMs to delete when uninstalling a cluster. In addition to the cluster-specific tags assigned to VMs, you can configure a machine set to add up to 10 additional vSphere tags to the VMs it provisions. Prerequisites You have access to an OpenShift Container Platform cluster installed on vSphere using an account with cluster-admin permissions. You have access to the VMware vCenter console associated with your cluster. You have created a tag in the vCenter console. You have installed the OpenShift CLI ( oc ). 
Procedure Use the vCenter console to find the tag ID for any tag that you want to add to your machines: Log in to the vCenter console. From the Home menu, click Tags & Custom Attributes . Select a tag that you want to add to your machines. Use the browser URL for the tag that you select to identify the tag ID. Example tag URL https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions Example tag ID urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2 # ... 1 Specify a list of up to 10 tags to add to the machines that this machine set provisions. 2 Specify the value of the tag that you want to add to your machines. For example, urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL . 2.10. Creating a compute machine set on bare metal You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on bare metal. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: $ oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.10.1. Sample YAML for a compute machine set custom resource on bare metal This sample YAML defines a compute machine set that runs on bare metal and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Edit the checksum URL to use the API VIP address. 11 Edit the url URL to use the API VIP address. 2.10.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command: $ oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: $ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: $ oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: $ oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.10.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition
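For reference, the cluster-api/accelerator label value is what the ClusterAutoscaler CR consumes through its spec.resourceLimits.gpus.type parameter. The following is a minimal sketch of that wiring, not the full resource definition; the min and max limits are illustrative assumptions, and the type value must match the label that you set on the machine set:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4   # must match the cluster-api/accelerator label value on the machine set
        min: 0            # illustrative assumption
        max: 4            # illustrative assumption
See "Cluster autoscaler resource definition" for the complete list of supported fields.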
|
[
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5",
"providerSpec: value: metadataServiceOptions: authentication: Required 1",
"providerSpec: placement: tenancy: dedicated",
"providerSpec: value: spotMarketOptions: {}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.29.4 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.29.4 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.29.4 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.29.4 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.29.4 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.29.4",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h",
"oc get machines -n openshift-machine-api | grep worker",
"preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h",
"oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>",
"jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"",
"oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -",
"10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",",
"oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json",
"machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created",
"oc -n openshift-machine-api get machinesets | grep gpu",
"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s",
"oc -n openshift-machine-api get machines | grep gpu",
"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"providerSpec: value: spotVMOptions: {}",
"oc edit machineset <machine-set-name>",
"providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4",
"oc create -f <machine-set-config>.yaml",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2",
"\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"oc edit machineset <machine-set-name>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4",
"oc create -f <machine-set-name>.yaml",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m",
"oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml",
"cat machineset-azure.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1",
"cp machineset-azure.yaml machineset-azure-gpu.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1",
"diff machineset-azure.yaml machineset-azure-gpu.yaml",
"14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3",
"oc create -f machineset-azure-gpu.yaml",
"machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.29.4 myclustername-master-1 Ready control-plane,master 6h41m v1.29.4 myclustername-master-2 Ready control-plane,master 6h39m v1.29.4 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.29.4 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.29.4 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.29.4 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.29.4",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h",
"oc create -f machineset-azure-gpu.yaml",
"get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h",
"oc get machineset -n openshift-machine-api | grep gpu",
"myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m",
"oc -n openshift-machine-api get machines | grep gpu",
"myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3",
"providerSpec: value: preemptible: true",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3",
"providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5",
"machineType: a2-highgpu-1g onHostMaintenance: Terminate",
"{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" }, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { \"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.29.4 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.29.4 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.29.4 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.29.4 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.29.4 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.29.4 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.29.4",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h",
"oc get machines -n openshift-machine-api | grep worker",
"myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h",
"oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>",
"jq .spec.template.spec.providerSpec.value.machineType ocp_4.16_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"",
"\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",",
"oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.16_machineset-a2-highgpu-1g.json -",
"15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",",
"oc create -f ocp_4.16_machineset-a2-highgpu-1g.json",
"machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created",
"oc -n openshift-machine-api get machinesets | grep gpu",
"myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m",
"oc -n openshift-machine-api get machines | grep gpu",
"myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>",
"oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>",
"oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"disableTemplating: false userData: 1 { \"ignition\": { }, }",
"oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions",
"urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_management/managing-compute-machines-with-the-machine-api
|
Chapter 3. Using the OpenShift Dedicated dashboard to get cluster information
|
Chapter 3. Using the OpenShift Dedicated dashboard to get cluster information The OpenShift Dedicated web console captures high-level information about the cluster. 3.1. About the OpenShift Dedicated dashboards page Access the OpenShift Dedicated dashboard, which captures high-level information about the cluster, by navigating to Home Overview from the OpenShift Dedicated web console. The OpenShift Dedicated dashboard provides various cluster information, captured in individual dashboard cards. The OpenShift Dedicated dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Statuses include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment) Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about: CPU time Memory allocation Storage consumed Network resources consumed Pod count Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. 3.2. Recognizing resource and project limits and quotas You can view a graphical representation of available resources in the Topology view of the web console Developer perspective. If a resource has a message about resource limitations or quotas being reached, a yellow border appears around the resource name. Click the resource to open a side panel to see the message. If the Topology view has been zoomed out, a yellow dot indicates that a message is available. If you are using List View from the View Shortcuts menu, resources appear as a list. The Alerts column indicates if a message is available.
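The same inventory figures that the dashboard cards summarize can also be pulled from the command line. The following is a rough, hedged sketch (not part of the original procedure) using the oc client; it assumes cluster-admin access and, for the utilization query, that cluster metrics are available.

# Count the resources summarized by the Cluster Inventory card (assumes cluster-admin access)
oc get nodes --no-headers | wc -l
oc get pods --all-namespaces --no-headers | wc -l
oc get persistentvolumeclaims --all-namespaces --no-headers | wc -l

# Approximate the Cluster Utilization card with current node metrics (requires metrics to be available)
oc adm top nodes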
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/web_console/using-dashboard-to-get-cluster-info
|
8.124. libvirt
|
8.124. libvirt 8.124.1. RHBA-2014:1374 - libvirt bug fix and enhancement update Updated libvirt packages that fix numerous bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 1117177 Previously, the logic behind using the virsh command with the --config option, which handles the virtual domain configuration, was incorrect. Consequently, block devices were attached to both the domain configuration and the running domain. Both the handling logic and relevant technical documentation have been fixed, and virsh with --config now behaves correctly, attaching the block device to the domain configuration only. BZ# 999454 Prior to this update, the libvirt Python bindings for querying block job status could not distinguish between returning an error and no status available. As a consequence, the code that was polling for the completion of a block job had to deal with a Python exception, and could not distinguish it from an actual error. With this update, the bindings now successfully determine if there is no job and return an empty dictionary when that is the case. As a result, the bindings can be used more reliably when managing block jobs. BZ# 1078589 A previous update introduced an error where a SIG_SETMASK argument was incorrectly replaced by a SIG_BLOCK argument after the poll() system call. Consequently, the SIGCHLD signal could be permanently blocked, which caused signal masks not to return to their original values and defunct processes to be generated. With this update, the original signal masks are restored as intended, and poll() now functions correctly. BZ# 1066473 When hot unplugging a virtual CPU (vCPU) from a guest using libvirt, the current Red Hat Enterprise Linux QEMU implementation does not remove the corresponding vCPU thread. Consequently, libvirt did not detect the vCPU count correctly after a vCPU was hot unplugged, and it was not possible to hot plug a vCPU after a hot unplug. In this update, information from QEMU is used to filter out inactive vCPU threads of disabled vCPUs, which allows libvirt to perform the hot plug. BZ# 1076719 Prior to this update, the condition that checks whether QEMU successfully attached a new disk to a guest contained a typographical error. Due to the error, the libvirtd daemon terminated unexpectedly if the monitor command was unsuccessful: for example, when a virtual machine failed or when attaching a guest disk drive was interrupted. In this update, the error has been corrected, and libvirtd no longer crashes in the described circumstances. BZ# 1126393 The libvirt library has limits on Remote Procedure Call (RPC) messages to prevent Denial of Service (DoS) attacks. Previously, however, the domain XML file could fail this limit test when it was encoded into an RPC message and sent to the target machine during migration. As a consequence, the migration failed even though the domain XML format was valid. To fix this bug, the RPC message limits have been increased, and the migration now succeeds, while libvirt stays resistant to DoS attacks. BZ# 1113828 Due to a regression caused by a prior bug fix, attempting to perform a block copy while another block copy was already in progress could cause libvirt to reset the information about the block copy in progress.
As a consequence, libvirt failed to recognize if the copied file format was raw, and performed a redundant format probe on the guest disk. This update fixes the regression and libvirt no longer performs incorrect format probes. BZ# 947974 The UUID (Universally Unique Identifier) is a string of characters which represents the virtual guest. Displaying the UUID on a screen requires correct APIs to present the strings in a user-readable format. Previously, printing unformatted UUID data caused exceptions or incorrectly formatted output. For Python scripts, exceptions that were not handled could cause unexpected failures. For other logging methods or visual displays, the characters in the output were jumbled. With this update, the UUID strings are properly formatted and printing them no longer causes unexpected exceptions or jumbled characters on output. BZ# 1011906 When receiving NUMA (Non-Uniform Memory Access) placement advice, the current memory was used for the amount parameter. As a consequence, domain placement was not as precise as it could have been if the current memory changed for the live domain. With this update, the advice is queried with the maximum memory as the amount parameter, and the advised placement now fixes the domain even when the current memory changes for the live domain. BZ# 807023 Previously, libvirt reported success of the device_del command even when the device was not successfully detached. With this update, libvirt always verifies whether device_del succeeded, and when the command fails, libvirt reports it accordingly. BZ# 977706 Prior to this update, using the virsh pool-refresh command incorrectly caused libvirt to remove a storage pool if a storage volume was removed while the command was being processed. As a consequence, the storage pool became inactive, even though the NFS directory was mounted. With this update, refreshing a storage pool no longer removes a volume from it. As a result, libvirt does not cause the storage pool to become inactive. Enhancements BZ# 1033984 A new pvpanic virtual device can now be attached to the virtualization stack and a guest panic can cause libvirt to send a notification event to management applications. BZ# 1100381 This update adds support for the following Broadwell microarchitecture processors' instructions: ADCX, ADOX, RDSEED, and PREFETCHW. This improves the overall performance of KVM . Users of libvirt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing the updated packages, libvirtd will be restarted automatically.
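To make the BZ# 1117177 behavior concrete, the following is a minimal, hypothetical virsh sketch; the guest name and disk image path are placeholders rather than values from the advisory. With the fix, the --config option changes only the persistent domain definition, which can be confirmed by dumping the inactive XML.

# Attach a disk to the persistent configuration only (takes effect on the next boot of the guest)
virsh attach-disk guest1 /var/lib/libvirt/images/data.img vdb --config

# Verify that the device was added to the inactive (persistent) definition, not the running domain
virsh dumpxml guest1 --inactive | grep -A 3 vdb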
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libvirt
|
Providing feedback on Red Hat build of OpenJDK documentation
|
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.442_release_notes/providing-direct-documentation-feedback_openjdk
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/proc_providing-feedback-on-red-hat-documentation_managing-users-groups-hosts
|
Chapter 13. Red Hat build of Keycloak admin client
|
Chapter 13. Red Hat build of Keycloak admin client The Red Hat build of Keycloak admin client is a Java library that facilitates the access and usage of the Red Hat build of Keycloak Admin REST API. The library requires Java 11 or higher at runtime (RESTEasy dependency enforces this requirement). To use it from your application add a dependency on the keycloak-admin-client library. For example using Maven: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency> The following example shows how to use the Java client library to get the details of the master realm: import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; ... Keycloak keycloak = Keycloak.getInstance( "http://localhost:8080", "master", "admin", "password", "admin-cli"); RealmRepresentation realm = keycloak.realm("master").toRepresentation(); Complete Javadoc for the admin client is available at API Documentation .
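Because the Java library is a wrapper around the Admin REST API, the same realm lookup can be reproduced directly over HTTP. The following curl sketch is a supplement, not part of the official example; it assumes the same local server, admin credentials, and admin-cli client as the Java example above, plus the jq tool for extracting the token.

# Obtain an admin access token from the master realm using the admin-cli client
TOKEN=$(curl -s \
  -d "client_id=admin-cli" -d "grant_type=password" \
  -d "username=admin" -d "password=password" \
  "http://localhost:8080/realms/master/protocol/openid-connect/token" | jq -r '.access_token')

# Fetch the master realm representation, the REST call behind keycloak.realm("master").toRepresentation()
curl -s -H "Authorization: Bearer $TOKEN" "http://localhost:8080/admin/realms/master"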
|
[
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency>",
"import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmRepresentation realm = keycloak.realm(\"master\").toRepresentation();"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/admin-client-
|
18.3.5. Automating the Installation with Kickstart
|
18.3.5. Automating the Installation with Kickstart You can allow an installation to run unattended by using Kickstart. A Kickstart file specifies settings for an installation. Once the installation system boots, it can read a Kickstart file and carry out the installation process without any further input from a user. On System z, this also requires a parameter file (optionally an additional configuration file under z/VM). This parameter file must contain the required network options described in Section 26.3, "Installation Network Parameters" and specify a kickstart file using the ks= option. The kickstart file typically resides on the network. The parameter file often also contains the options cmdline and RUNKS=1 to execute the loader without having to log in over the network with SSH (Refer to Section 26.6, "Parameters for Kickstart Installations" ). For further information and details on how to set up a kickstart file, refer to Section 32.3, "Creating the Kickstart File" . 18.3.5.1. Every Installation Produces a Kickstart File The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg . You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems.
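As an illustration of the options named above, the Kickstart-related portion of a parameter file could look like the following sketch; the Kickstart URL is a placeholder, and the required network options from Section 26.3, "Installation Network Parameters" are omitted here for brevity.

RUNKS=1 cmdline
ks=http://server.example.com/kickstart/rhel6-s390x.cfg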
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/installation_procedure_overview-s390-automating
|
Chapter 7. Troubleshooting CodeReady Workspaces
|
Chapter 7. Troubleshooting CodeReady Workspaces This section provides troubleshooting procedures for the most frequent issues a user can come in conflict with. Additional resources Section 7.1, "Viewing CodeReady Workspaces workspaces logs" Section 7.2, "Investigating failures at a workspace start using the Verbose mode" Section 7.3, "Troubleshooting slow workspaces" Section 7.4, "Troubleshooting network problems" 7.1. Viewing CodeReady Workspaces workspaces logs This section describes how to view CodeReady Workspaces workspaces logs. 7.1.1. Viewing logs from language servers and debug adapters 7.1.1.1. Checking important logs This section describes how to check important logs. Procedure In the OpenShift web console, click Applications Pods to see a list of all the active workspaces. Click on the name of the running Pod where the workspace is running. The Pod screen contains the list of all containers with additional information. Choose a container and click the container name. Note The most important logs are the theia-ide container and the plug-ins container logs. On the container screen, navigate to the Logs section. 7.1.1.2. Detecting memory problems This section describes how to detect memory problems related to a plug-in running out of memory. The following are the two most common problems related to a plug-in running out of memory: The plug-in container runs out of memory This can happen during plug-in initialization when the container does not have enough RAM to execute the entrypoint of the image. The user can detect this in the logs of the plug-in container. In this case, the logs contain OOMKilled , which implies that the processes in the container requested more memory than is available in the container. A process inside the container runs out of memory without the container noticing this For example, the Java language server (Eclipse JDT Language Server, started by the vscode-java extension) throws an OutOfMemoryException . This can happen any time after the container is initialized, for example, when a plug-in starts a language server or when a process runs out of memory because of the size of the project it has to handle. To detect this problem, check the logs of the primary process running in the container. For example, to check the log file of Eclipse JDT Language Server for details, see the relevant plug-in-specific sections. 7.1.1.3. Logging the client-server traffic for debug adapters This section describes how to log the exchange between Che-Theia and a debug adapter into the Output view. Prerequisites A debug session must be started for the Debug adapters option to appear in the list. Procedure Click File Settings and then open Preferences . Expand the Debug section in the Preferences view. Set the trace preference value to true (default is false ). All the communication events are logged. To watch these events, click View Output and select Debug adapters from the drop-down list at the upper right corner of the Output view. 7.1.1.4. Viewing logs for Python This section describes how to view logs for the Python language server. Procedure Navigate to the Output view and select Python in the drop-down list. 7.1.1.5. Viewing logs for Go This section describes how to view logs for the Go language server. 7.1.1.5.1. Finding the Go path This section describes how to find where the GOPATH variable points to. Procedure Execute the Go: Current GOPATH command. 7.1.1.5.2. Viewing the Debug Console log for Go This section describes how to view the log output from the Go debugger. 
Procedure Set the showLog attribute to true in the debug configuration. { "version": "0.2.0", "configurations": [ { "type": "go", "showLog": true .... } ] } To enable debugging output for a component, add the package to the comma-separated list value of the logOutput attribute: { "version": "0.2.0", "configurations": [ { "type": "go", "showLog": true, "logOutput": "debugger,rpc,gdbwire,lldbout,debuglineerr" .... } ] } The additional information is printed in the debug console. 7.1.1.5.3. Viewing the Go logs output in the Output panel This section describes how to view the Go logs output in the Output panel. Procedure Navigate to the Output view. Select Go in the drop-down list. 7.1.1.6. Viewing logs for the NodeDebug NodeDebug2 adapter Note No specific diagnostics exist other than the general ones. 7.1.1.7. Viewing logs for Typescript 7.1.1.7.1. Enabling the Language Server Protocol (LSP) tracing Procedure To enable the tracing of messages sent to the Typescript (TS) server, in the Preferences view, set the typescript.tsserver.trace attribute to verbose . Use this to diagnose the TS server issues. To enable logging of the TS server to a file, set the typescript.tsserver.log attribute to verbose . Use this log to diagnose the TS server issues. The log contains the file paths. 7.1.1.7.2. Viewing the Typescript language server log This section describes how to view the Typescript language server log. Procedure To get the path to the log file, see the Typescript Output console: To open the log file, use the Open TS Server log command. 7.1.1.7.3. Viewing the Typescript logs output in the Output panel This section describes how to view the Typescript logs output in the Output panel. Procedure Navigate to the Output view. Select TypeScript in the drop-down list. 7.1.1.8. Viewing logs for Java Other than the general diagnostics, there are Language Support for Java (Eclipse JDT Language Server) plug-in actions that the user can perform. 7.1.1.8.1. Verifying the state of the Eclipse JDT Language Server Procedure Check if the container that is running the Eclipse JDT Language Server plug-in is running the Eclipse JDT Language Server main process. Open a terminal in the container that is running the Eclipse JDT Language Server plug-in (an example name for the container: vscode-javaxxx ). Inside the terminal, run the ps aux | grep jdt command to check if the Eclipse JDT Language Server process is running in the container. If the process is running, the output is: This message also shows the Visual Studio Code Java extension used. If it is not running, the language server has not been started inside the container. Check all logs described in Checking important logs 7.1.1.8.2. Verifying the Eclipse JDT Language Server features Procedure If the Eclipse JDT Language Server process is running, check if the language server features are working: Open a Java file and use the hover or autocomplete functionality. In case of an erroneous file, the user sees Java in the Outline view or in the Problems view. 7.1.1.8.3. Viewing the Java language server log Procedure The Eclipse JDT Language Server has its own workspace where it logs errors, information about executed commands, and events. To open this log file, open a terminal in the container that is running the Eclipse JDT Language Server plug-in. You can also view the log file by running the Java: Open Java Language Server log file command. 
Run cat <PATH_TO_LOG_FILE> where PATH_TO_LOG_FILE is /home/theia/.theia/workspace-storage/ <workspace_name> /redhat.java/jdt_ws/.metadata/.log . 7.1.1.8.4. Logging the Java language server protocol (LSP) messages Procedure To log the LSP messages to the Visual Studio Code Output view, enable tracing by setting the java.trace.server attribute to verbose . Additional resources For troubleshooting instructions, see the Visual Studio Code Java GitHub repository . 7.1.1.9. Viewing logs for Intelephense 7.1.1.9.1. Logging the Intelephense client-server communication Procedure To configure the PHP Intelephense language support to log the client-server communication in the Output view: Click File Settings . Open the Preferences view. Expand the Intelephense section and set the trace.server.verbose preference value to verbose to see all the communication events (the default value is off ). 7.1.1.9.2. Viewing Intelephense events in the Output panel This procedure describes how to view Intelephense events in the Output panel. Procedure Click View Output . Select Intelephense in the drop-down list for the Output view. 7.1.1.10. Viewing logs for PHP-Debug This procedure describes how to configure the PHP Debug plug-in to log the PHP Debug plug-in diagnostic messages into the Debug Console view. Configure this before the start of the debug session. Procedure In the launch.json file, add the "log": true attribute to the php configuration. Start the debug session. The diagnostic messages are printed into the Debug Console view along with the application output. 7.1.1.11. Viewing logs for XML Other than the general diagnostics, there are XML plug-in specific actions that the user can perform. 7.1.1.11.1. Verifying the state of the XML language server Procedure Open a terminal in the container named vscode-xml- <xxx> . Run ps aux | grep java to verify that the XML language server has started. If the process is running, the output is: If it is not, see the Checking important logs chapter. 7.1.1.11.2. Checking XML language server feature flags Procedure Check if the features are enabled. The XML plug-in provides multiple settings that can enable and disable features: xml.format.enabled : Enable the formatter xml.validation.enabled : Enable the validation xml.documentSymbols.enabled : Enable the document symbols To diagnose whether the XML language server is working, create a simple XML element, such as <hello></hello> , and confirm that it appears in the Outline panel on the right. If the document symbols do not show, ensure that the xml.documentSymbols.enabled attribute is set to true . If it is true , and there are no symbols, the language server may not be hooked to the editor. If there are document symbols, then the language server is connected to the editor. Ensure that the features that the user needs are set to true in the settings (they are set to true by default). If any of the features are not working, or not working as expected, file an issue against the Language Server . 7.1.1.11.3. Enabling XML Language Server Protocol (LSP) tracing Procedure To log LSP messages to the Visual Studio Code Output view, enable tracing by setting the xml.trace.server attribute to verbose . 7.1.1.11.4. Viewing the XML language server log Procedure The log from the language server can be found in the plug-in sidecar at /home/theia/.theia/workspace-storage/<workspace_name>/redhat.vscode-xml/lsp4xml.log . 7.1.1.12. 
Viewing logs for YAML This section describes the YAML plug-in specific actions that the user can perform, in addition to the general diagnostics ones. 7.1.1.12.1. Verifying the state of the YAML language server This section describes how to verify the state of the YAML language server. Procedure Check if the container running the YAML plug-in is running the YAML language server. In the editor, open a terminal in the container that is running the YAML plug-in (an example name of the container: vscode-yaml- <xxx> ). In the terminal, run the ps aux | grep node command. This command searches all the node processes running in the current container. Verify that a command node **/server.js is running. The node **/server.js running in the container indicates that the language server is running. If it is not running, the language server has not started inside the container. In this case, see Checking important logs . 7.1.1.12.2. Checking the YAML language server feature flags Procedure To check the feature flags: Check if the features are enabled. The YAML plug-in provides multiple settings that can enable and disable features, such as: yaml.format.enable : Enables the formatter yaml.validate : Enables validation yaml.hover : Enables the hover function yaml.completion : Enables the completion function To check if the plug-in is working, type the simplest YAML, such as hello: world , and then open the Outline panel on the right side of the editor. Verify if there are any document symbols. If yes, the language server is connected to the editor. If any feature is not working, verify that the settings listed above are set to true (they are set to true by default). If a feature is not working, file an issue against the Language Server . 7.1.1.12.3. Enabling YAML Language Server Protocol (LSP) tracing Procedure To log LSP messages to the Visual Studio Code Output view, enable tracing by setting the yaml.trace.server attribute to verbose . 7.1.1.13. Viewing logs for .NET with OmniSharp-Theia plug-in 7.1.1.13.1. OmniSharp-Theia plug-in CodeReady Workspaces uses the OmniSharp-Theia plug-in as a remote plug-in. It is located at github.com/redhat-developer/omnisharp-theia-plugin . In case of an issue, report it, or contribute your fix in the repository. This plug-in registers omnisharp-roslyn as a language server and provides project dependencies and language syntax for C# applications. The language server runs on .NET SDK 2.2.105. 7.1.1.13.2. Verifying the state of the OmniSharp-Theia plug-in language server Procedure To check if the container running the OmniSharp-Theia plug-in is running OmniSharp, execute the ps aux | grep OmniSharp.exe command. If the process is running, the following is an example output: If the output is different, the language server has not started inside the container. Check the logs described in Checking important logs . 7.1.1.13.3. Checking OmniSharp Che-Theia plug-in language server features Procedure If the OmniSharp.exe process is running, check if the language server features are working by opening a .cs file and trying the hover or completion features, or opening the Problems or Outline view. 7.1.1.13.4. Viewing OmniSharp-Theia plug-in logs in the Output panel Procedure If OmniSharp.exe is running, it logs all information in the Output panel. To view the logs, open the Output view and select C# from the drop-down list. 7.1.1.14. Viewing logs for .NET with NetcoredebugOutput plug-in 7.1.1.14.1. 
NetcoredebugOutput plug-in The NetcoredebugOutput plug-in provides the netcoredbg tool. This tool implements the Visual Studio Code Debug Adapter protocol and allows users to debug .NET applications under the .NET Core runtime. The container where the NetcoredebugOutput plug-in is running contains .NET SDK v.2.2.105. 7.1.1.14.2. Verifying the state of the NetcoredebugOutput plug-in Procedure Search for a netcoredbg debug configuration in the launch.json file. Example 7.1. Sample debug configuration { "type": "netcoredbg", "request": "launch", "program": "USD{workspaceFolder}/bin/Debug/ <target-framework> / <project-name.dll> ", "args": [], "name": ".NET Core Launch (console)", "stopAtEntry": false, "console": "internalConsole" } Test the autocompletion feature within the braces of the configuration section of the launch.json file. If you can find netcoredbg , the Che-Theia plug-in is correctly initialized. If not, see Checking important logs . 7.1.1.14.3. Viewing NetcoredebugOutput plug-in logs in the Output panel This section describes how to view NetcoredebugOutput plug-in logs in the Output panel. Procedure Open the Debug console. 7.1.1.15. Viewing logs for Camel 7.1.1.15.1. Verifying the state of the Camel language server Procedure The user can inspect the log output of the sidecar container using the Camel language tools that are stored in the vscode-apache-camel <xxx> Camel container. To verify the state of the language server: Open a terminal inside the vscode-apache-camel <xxx> container. Run the ps aux | grep java command. The following is an example language server process: If you cannot find it, see Checking important logs . 7.1.1.15.2. Viewing Camel logs in the Output panel The Camel language server is a SpringBoot application that writes its log to the USD\{java.io.tmpdir}/log-camel-lsp.out file. Typically, USD\{java.io.tmpdir} points to the /tmp directory, so the filename is /tmp/log-camel-lsp.out . Procedure The Camel language server logs are printed in the Output channel named Language Support for Apache Camel . Note The output channel is created only at the first created log entry on the client side. It may be absent when everything is going well. 7.1.2. Viewing Che-Theia IDE logs This section describes how to view Che-Theia IDE logs. 7.1.2.1. Viewing Che-Theia editor logs using the OpenShift CLI Observing Che-Theia editor logs helps to get a better understanding and insight over the plug-ins loaded by the editor. This section describes how to access the Che-Theia editor logs using the OpenShift CLI (command-line interface). Prerequisites CodeReady Workspaces is deployed in an OpenShift cluster. A workspace is created. User is located in a CodeReady Workspaces installation project. Procedure Obtain the list of the available Pods: Example Obtain the list of the available containers in the particular Pod: USD oc get pods <name-of-pod> --output jsonpath='\{.spec.containers[*].name}' Example: Get logs from the theia/ide container: Example: 7.2. Investigating failures at a workspace start using the Verbose mode Verbose mode allows users to reach an enlarged log output, investigating failures at a workspace start. In addition to usual log entries, the Verbose mode also lists the container logs of each workspace. 7.2.1. Restarting a CodeReady Workspaces workspace in Verbose mode after start failure This section describes how to restart a CodeReady Workspaces workspace in the Verbose mode after a failure during the workspace start. 
Dashboard proposes the restart of a workspace in the Verbose mode once the workspace fails at its start. Prerequisites A running instance of CodeReady Workspaces. To install an instance of CodeReady Workspaces, see Installing CodeReady Workspaces . An existing workspace that fails to start. Procedure Using Dashboard, try to start a workspace. When it fails to start, click on the displayed Open in Verbose mode link. Check the Logs tab to find a reason for the workspace failure. 7.2.2. Starting a CodeReady Workspaces workspace in Verbose mode This section describes how to start the Red Hat CodeReady Workspaces workspace in Verbose mode. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc . An existing workspace defined on this instance of CodeReady Workspaces. Procedure Open the Workspaces tab. On the left side of a row dedicated to the workspace, access the drop-down menu displayed as three horizontal dots and select the Open in Verbose mode option. Alternatively, this option is also available in the workspace details, under the Actions drop-down menu. Check the Logs tab to find a reason for the workspace failure. 7.3. Troubleshooting slow workspaces Sometimes, workspaces can take a long time to start. Tuning can reduce this start time. Depending on the options, administrators or users can do the tuning. This section includes several tuning options for starting workspaces faster or improving workspace runtime performance. 7.3.1. Improving workspace start time Caching images with Image Puller Role: Administrator When starting a workspace, OpenShift pulls the images from the registry. A workspace can include many containers meaning that OpenShift pulls Pod's images (one per container). Depending on the size of the image and the bandwidth, it can take a long time. Image Puller is a tool that can cache images on each of OpenShift nodes. As such, pre-pulling images can improve start times. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#caching-images-for-faster-workspace-start.adoc . Choosing better storage type Role: Administrator and user Every workspace has a shared volume attached. This volume stores the project files, so that when restarting a workspace, changes are still available. Depending on the storage, attach time can take up to a few minutes, and I/O can be slow. To avoid this problem, use ephemeral or asynchronous storage. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#configuring-storage-types.adoc . Installing offline Role: Administrator Components of CodeReady Workspaces are OCI images. Set up Red Hat CodeReady Workspaces in offline mode (air-gap scenario) to reduce any extra download at runtime because everything needs to be available from the beginning. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che-in-a-restricted-environment.adoc . Optimizing workspace plug-ins Role: User When selecting various plug-ins, each plug-in can bring its own sidecar container, which is an OCI image. OpenShift pulls the images of these sidecar containers. Reduce the number of plug-ins, or disable them to see if start time is faster. 
See also https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#caching-images-for-faster-workspace-start.adoc . Reducing the number of public endpoints Role: Administrator For each endpoint, OpenShift is creating OpenShift Route objects. Depending on the underlying configuration, this creation can be slow. To avoid this problem, reduce the exposure. For example, to automatically detect a new port listening inside containers and redirect traffic for the processes using a local IP address ( 127.0.0.1 ), the Che-Theia IDE plug-in has three optional routes. By reducing the number of endpoints and checking endpoints of all plug-ins, workspace start can be faster. CDN configuration The IDE editor uses a CDN (Content Delivery Network) to serve content. Check that the content uses a CDN to the client (or a local route for offline setup). To check that, open Developer Tools in the browser and check for vendors in the Network tab. vendors.<random-id>.js or editor.main.* should come from CDN URLs. 7.3.2. Improving workspace runtime performance Providing enough CPU resources Plug-ins consume CPU resources. For example, when a plug-in provides IntelliSense features, adding more CPU resources may lead to better performance. Ensure the CPU settings in the devfile definition, devfile.yaml , are correct: apiVersion: 1.0.0 components: - type: chePlugin id: id/of/plug-in cpuLimit: 1360Mi 1 cpuRequest: 100m 2 1 Specifies the CPU limit for the plug-in. 2 Specifies the CPU request for the plug-in. Providing enough memory Plug-ins consume CPU and memory resources. For example, when a plug-in provides IntelliSense features, collecting data can consume all the memory allocated to the container. Providing more memory to the plug-in can increase performance. Ensure that the memory settings are correct: in the plug-in definition - meta.yaml file in the devfile definition - devfile.yaml file apiVersion: v2 spec: containers: - image: "quay.io/my-image" name: "vscode-plugin" memoryLimit: "512Mi" 1 extensions: - https://link.to/vsix 1 Specifies the memory limit for the plug-in. In the devfile definition ( devfile.yaml ): apiVersion: 1.0.0 components: - type: chePlugin id: id/of/plug-in memoryLimit: 1048M 1 memoryRequest: 256M 1 Specifies the memory limit for this plug-in. Choosing better storage type Use ephemeral or asynchronous storage for faster I/O. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#configuring-storage-types.adoc . 7.4. Troubleshooting network problems This section describes how to prevent or resolve issues related to network policies. CodeReady Workspaces requires the availability of the WebSocket Secure (WSS) connections. Secure WebSocket connections improve confidentiality and also reliability because they reduce the risk of interference by bad proxies. Prerequisites The WebSocket Secure (WSS) connections on port 443 must be available on the network. Firewall and proxy may need additional configuration. Use a supported web browser: Google Chrome Mozilla Firefox Procedure Verify the browser supports the WebSocket protocol. See: Searching a websocket test . Verify firewall settings: WebSocket Secure (WSS) connections on port 443 must be available. Verify proxy server settings: The proxy transmits and intercepts WebSocket Secure (WSS) connections on port 443.
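To complement Section 7.4, the following hedged sketch shows one way to confirm from a workstation that TLS traffic on port 443 reaches the CodeReady Workspaces host, which is a prerequisite for WebSocket Secure connections; the host name is a placeholder, and the exact URL depends on your installation.

# Confirm that a TLS connection on port 443 can be established to the CodeReady Workspaces host
openssl s_client -connect codeready.example.com:443 </dev/null

# Confirm that HTTPS requests are not blocked or rewritten by a firewall or proxy along the way
curl -vk https://codeready.example.com/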
|
[
"{ \"version\": \"0.2.0\", \"configurations\": [ { \"type\": \"go\", \"showLog\": true . } ] }",
"{ \"version\": \"0.2.0\", \"configurations\": [ { \"type\": \"go\", \"showLog\": true, \"logOutput\": \"debugger,rpc,gdbwire,lldbout,debuglineerr\" . } ] }",
"usr/lib/jvm/default-jvm/bin/java --add-modules=ALL-SYSTEM --add-opens java.base/java.util",
"java ***/org.eclipse.ls4xml-uber.jar`",
"/tmp/theia-unpacked/redhat-developer.che-omnisharp-plugin.0.0.1.zcpaqpczwb.omnisharp_theia_plugin.theia/server/bin/mono /tmp/theia-unpacked/redhat-developer.che-omnisharp-plugin.0.0.1.zcpaqpczwb.omnisharp_theia_plugin.theia/server/omnisharp/OmniSharp.exe",
"{ \"type\": \"netcoredbg\", \"request\": \"launch\", \"program\": \"USD{workspaceFolder}/bin/Debug/ <target-framework> / <project-name.dll> \", \"args\": [], \"name\": \".NET Core Launch (console)\", \"stopAtEntry\": false, \"console\": \"internalConsole\" }",
"java -jar /tmp/vscode-unpacked/camel-tooling.vscode-apache-camel.latest.euqhbmepxd.camel-tooling.vscode-apache-camel-0.0.14.vsix/extension/jars/language-server.jar",
"oc get pods",
"oc get pods NAME READY STATUS RESTARTS AGE codeready-9-xz6g8 1/1 Running 1 15h workspace0zqb2ew3py4srthh.go-cli-549cdcf69-9n4w2 4/4 Running 0 1h",
"oc get pods <name-of-pod> --output jsonpath='\\{.spec.containers[*].name}'",
"oc get pods workspace0zqb2ew3py4srthh.go-cli-549cdcf69-9n4w2 -o jsonpath='\\{.spec.containers[*].name}' > go-cli che-machine-exechr7 theia-idexzb vscode-gox3r",
"oc logs --follow <name-of-pod> --container <name-of-container>",
"oc logs --follow workspace0zqb2ew3py4srthh.go-cli-549cdcf69-9n4w2 -container theia-idexzb >root INFO unzipping the plug-in 'task_plugin.theia' to directory: /tmp/theia-unpacked/task_plugin.theia root INFO unzipping the plug-in 'theia_yeoman_plugin.theia' to directory: /tmp/theia-unpacked/theia_yeoman_plugin.theia root WARN A handler with prefix term is already registered. root INFO [nsfw-watcher: 75] Started watching: /home/theia/.theia root WARN e.onStart is slow, took: 367.4600000013015 ms root INFO [nsfw-watcher: 75] Started watching: /projects root INFO [nsfw-watcher: 75] Started watching: /projects/.theia/tasks.json root INFO [4f9590c5-e1c5-40d1-b9f8-ec31ec3bdac5] Sync of 9 plugins took: 62.26000000242493 ms root INFO [nsfw-watcher: 75] Started watching: /projects root INFO [hosted-plugin: 88] PLUGIN_HOST(88) starting instance",
"apiVersion: 1.0.0 components: - type: chePlugin id: id/of/plug-in cpuLimit: 1360Mi 1 cpuRequest: 100m 2",
"apiVersion: v2 spec: containers: - image: \"quay.io/my-image\" name: \"vscode-plugin\" memoryLimit: \"512Mi\" 1 extensions: - https://link.to/vsix",
"apiVersion: 1.0.0 components: - type: chePlugin id: id/of/plug-in memoryLimit: 1048M 1 memoryRequest: 256M"
] |
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/troubleshooting-codeready-workspaces_crw
|
8.183. rp-pppoe
|
8.183. rp-pppoe 8.183.1. RHBA-2013:0952 - rp-pppoe bug fix update Updated rp-pppoe packages that fix one bug are now available for Red Hat Enterprise Linux 6. The rp-pppoe packages provide the Roaring Penguin PPPoE (Point-to-Point Protocol over Ethernet) client, a user-mode program that does not require any kernel modifications. This client is fully compliant with RFC 2516, the official PPPoE specification. Bug Fix BZ# 841190 Previously, the pppoe-server service started by default at each system boot, which was not intended, because pppoe-server is supposed to run only when enabled by an administrator. This update ensures that pppoe-server is not started by default, thus fixing this bug. Users of rp-pppoe are advised to upgrade to these updated packages, which fix this bug.
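To confirm this behavior on a running system (a sketch; it assumes the init script installed by the rp-pppoe packages is named pppoe-server), list the runlevels for the service and disable it explicitly if necessary:

chkconfig --list pppoe-server
chkconfig pppoe-server off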
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rp-pppoe
|
3.12. RFKill
|
3.12. RFKill Many computer systems contain radio transmitters, including Wi-Fi, Bluetooth, and 3G devices. These devices consume power, which is wasted when the device is not in use. RFKill is a subsystem in the Linux kernel that provides an interface through which radio transmitters in a computer system can be queried, activated, and deactivated. When transmitters are deactivated, they can be placed in a state where software can reactivate them (a soft block ) or where software cannot reactivate them (a hard block ). The RFKill core provides the application programming interface (API) for the subsystem. Kernel drivers that have been designed to support RFKill use this API to register with the kernel, and include methods for enabling and disabling the device. Additionally, the RFKill core provides notifications that user applications can interpret and ways for user applications to query transmitter states. The RFKill interface is located at /dev/rfkill , which contains the current state of all radio transmitters on the system. Each device has its current RFKill state registered in sysfs . Additionally, RFKill issues uevents for each change of state in an RFKill-enabled device. Rfkill is a command-line tool with which you can query and change RFKill-enabled devices on the system. To obtain the tool, install the rfkill package. Use the command rfkill list to obtain a list of devices, each of which has an index number associated with it, starting at 0 . You can use this index number to tell rfkill to block or unblock a device, for example: blocks the first RFKill-enabled device on the system. You can also use rfkill to block certain categories of devices, or all RFKill-enabled devices. For example: blocks all Wi-Fi devices on the system. To block all RFKill-enabled devices, run: To unblock devices, run rfkill unblock instead of rfkill block . To obtain a full list of device categories that rfkill can block, run rfkill help .
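For example, to unblock all Wi-Fi devices again and confirm the result (an illustrative sketch; the devices listed depend on your hardware):

~]# rfkill unblock wifi
~]# rfkill list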
|
[
"~]# rfkill block 0",
"~]# rfkill block wifi",
"~]# rfkill block all"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/rfkill
|
Disaster Recovery Guide
|
Disaster Recovery Guide Red Hat Virtualization 4.3 Configure Red Hat Virtualization 4.3 for Disaster Recovery Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract Red Hat Virtualization can be configured to ensure that the environment remains operational even in the event of a catastrophe. This document provides information and instructions to configure Red Hat Virtualization environments for Disaster Recovery.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/disaster_recovery_guide/index
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/red_hat_enterprise_linux_system_roles_for_sap/conscious-language-message_rhel-system-roles-for-sap
|
7.163. polkit
|
7.163. polkit 7.163.1. RHBA-2015:0692 - polkit bug fix update Updated polkit packages that fix two bugs are now available for Red Hat Enterprise Linux 6. PolicyKit is a toolkit for defining and handling authorizations. It is used for allowing unprivileged processes to speak to privileged processes. Bug Fixes BZ# 1115649 Prior to this update, the polkitd daemon was not restarted after upgrading the polkit package, nor stopped after the package uninstallation. To fix this bug, scriptlets have been added to the polkit package. Upgrading the polkit package to the version shipped in this erratum does not yet restart the polkitd daemon. The daemon will be restarted after future upgrades from this version. BZ# 1130156 Previously, the output of "pkcheck --help" did not match the supported arguments and their expected form. This update removes the unimplemented "--list-temp" option from "pkcheck --help", and fixes other aspects of the text as well. Users of polkit are advised to upgrade to these updated packages, which fix these bugs.
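To verify the documentation fix described in BZ#1130156 after updating, you can check that the unimplemented option no longer appears in the help output (a simple sketch):

pkcheck --help | grep -- --list-temp

No output from this command indicates that the option has been removed from the help text.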
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-polkit
|
Installing on Alibaba
|
Installing on Alibaba OpenShift Container Platform 4.13 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team
|
[
"Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret",
"{ \"Version\": \"1\", \"Statement\": [ { \"Action\": [ \"tag:ListTagResources\", \"tag:UntagResources\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"vpc:DescribeVpcs\", \"vpc:DeleteVpc\", \"vpc:DescribeVSwitches\", \"vpc:DeleteVSwitch\", \"vpc:DescribeEipAddresses\", \"vpc:DescribeNatGateways\", \"vpc:ReleaseEipAddress\", \"vpc:DeleteNatGateway\", \"vpc:DescribeSnatTableEntries\", \"vpc:CreateSnatEntry\", \"vpc:AssociateEipAddress\", \"vpc:ListTagResources\", \"vpc:TagResources\", \"vpc:DescribeVSwitchAttributes\", \"vpc:CreateVSwitch\", \"vpc:CreateNatGateway\", \"vpc:DescribeRouteTableList\", \"vpc:CreateVpc\", \"vpc:AllocateEipAddress\", \"vpc:ListEnhanhcedNatGatewayAvailableZones\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ecs:ModifyInstanceAttribute\", \"ecs:DescribeSecurityGroups\", \"ecs:DeleteSecurityGroup\", \"ecs:DescribeSecurityGroupReferences\", \"ecs:DescribeSecurityGroupAttribute\", \"ecs:RevokeSecurityGroup\", \"ecs:DescribeInstances\", \"ecs:DeleteInstances\", \"ecs:DescribeNetworkInterfaces\", \"ecs:DescribeInstanceRamRole\", \"ecs:DescribeUserData\", \"ecs:DescribeDisks\", \"ecs:ListTagResources\", \"ecs:AuthorizeSecurityGroup\", \"ecs:RunInstances\", \"ecs:TagResources\", \"ecs:ModifySecurityGroupPolicy\", \"ecs:CreateSecurityGroup\", \"ecs:DescribeAvailableResource\", \"ecs:DescribeRegions\", \"ecs:AttachInstanceRamRole\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"pvtz:DescribeRegions\", \"pvtz:DescribeZones\", \"pvtz:DeleteZone\", \"pvtz:DeleteZoneRecord\", \"pvtz:BindZoneVpc\", \"pvtz:DescribeZoneRecords\", \"pvtz:AddZoneRecord\", \"pvtz:SetZoneRecordStatus\", \"pvtz:DescribeZoneInfo\", \"pvtz:DescribeSyncEcsHostTask\", \"pvtz:AddZone\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"slb:DescribeLoadBalancers\", \"slb:SetLoadBalancerDeleteProtection\", \"slb:DeleteLoadBalancer\", \"slb:SetLoadBalancerModificationProtection\", \"slb:DescribeLoadBalancerAttribute\", \"slb:AddBackendServers\", \"slb:DescribeLoadBalancerTCPListenerAttribute\", \"slb:SetLoadBalancerTCPListenerAttribute\", \"slb:StartLoadBalancerListener\", \"slb:CreateLoadBalancerTCPListener\", \"slb:ListTagResources\", \"slb:TagResources\", \"slb:CreateLoadBalancer\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ram:ListResourceGroups\", \"ram:DeleteResourceGroup\", \"ram:ListPolicyAttachments\", \"ram:DetachPolicy\", \"ram:GetResourceGroup\", \"ram:CreateResourceGroup\", \"ram:DeleteRole\", \"ram:GetPolicy\", \"ram:DeletePolicy\", \"ram:ListPoliciesForRole\", \"ram:CreateRole\", \"ram:AttachPolicyToRole\", \"ram:GetRole\", \"ram:CreatePolicy\", \"ram:CreateUser\", \"ram:DetachPolicyFromRole\", \"ram:CreatePolicyVersion\", \"ram:DetachPolicyFromUser\", \"ram:ListPoliciesForUser\", \"ram:AttachPolicyToUser\", \"ram:CreateUser\", \"ram:GetUser\", \"ram:DeleteUser\", \"ram:CreateAccessKey\", \"ram:ListAccessKeys\", \"ram:DeleteAccessKey\", \"ram:ListUsers\", \"ram:ListPolicyVersions\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"oss:DeleteBucket\", \"oss:DeleteBucketTagging\", \"oss:GetBucketTagging\", \"oss:GetBucketCors\", \"oss:GetBucketPolicy\", \"oss:GetBucketLifecycle\", \"oss:GetBucketReferer\", \"oss:GetBucketTransferAcceleration\", \"oss:GetBucketLog\", \"oss:GetBucketWebSite\", \"oss:GetBucketInfo\", \"oss:PutBucketTagging\", \"oss:PutBucket\", \"oss:OpenOssService\", \"oss:ListBuckets\", \"oss:GetService\", \"oss:PutBucketACL\", \"oss:GetBucketLogging\", 
\"oss:ListObjects\", \"oss:GetObject\", \"oss:PutObject\", \"oss:DeleteObject\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"alidns:DescribeDomainRecords\", \"alidns:DeleteDomainRecord\", \"alidns:DescribeDomains\", \"alidns:DescribeDomainRecordInfo\", \"alidns:AddDomainRecord\", \"alidns:SetDomainRecordStatus\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"bssapi:CreateInstance\", \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"ram:PassRole\", \"Resource\": \"*\", \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\": { \"acs:Service\": \"ecs.aliyuncs.com\" } } } ] }",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=alibabacloud --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4",
"ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --output-dir=<path_to_ccoctl_output_dir>",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=alibabacloud --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4",
"ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --output-dir=<path_to_ccoctl_output_dir>",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"<1> `credrequests` is the directory where the list of `CredentialsRequest` objects is stored. This command creates the directory if it does not exist.",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=alibabacloud --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4",
"ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --output-dir=<path_to_ccoctl_output_dir>",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/installing_on_alibaba/index
|
7.3. Colocation of Resources
|
7.3. Colocation of Resources A colocation constraint determines that the location of one resource depends on the location of another resource. There is an important side effect of creating a colocation constraint between two resources: it affects the order in which resources are assigned to a node. This is because you cannot place resource A relative to resource B unless you know where resource B is. So when you are creating colocation constraints, it is important to consider whether you should colocate resource A with resource B or resource B with resource A. Another thing to keep in mind when creating colocation constraints is that, assuming resource A is colocated with resource B, the cluster will also take into account resource A's preferences when deciding which node to choose for resource B. The following command creates a colocation constraint. For information on master and slave resources, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . Table 7.4, "Properties of a Colocation Constraint" summarizes the properties and options for configuring colocation constraints. Table 7.4. Properties of a Colocation Constraint Field Description source_resource The colocation source. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all. target_resource The colocation target. The cluster will decide where to put this resource first and then decide where to put the source resource. score Positive values indicate the resource should run on the same node. Negative values indicate the resources should not run on the same node. A value of + INFINITY , the default value, indicates that the source_resource must run on the same node as the target_resource . A value of - INFINITY indicates that the source_resource must not run on the same node as the target_resource . 7.3.1. Mandatory Placement Mandatory placement occurs any time the constraint's score is +INFINITY or -INFINITY . In such cases, if the constraint cannot be satisfied, then the source_resource is not permitted to run. For score=INFINITY , this includes cases where the target_resource is not active. If you need myresource1 to always run on the same machine as myresource2 , you would add the following constraint: Because INFINITY was used, if myresource2 cannot run on any of the cluster nodes (for whatever reason) then myresource1 will not be allowed to run. Alternatively, you may want to configure the opposite, a cluster in which myresource1 cannot run on the same machine as myresource2 . In this case, use score=-INFINITY . Again, by specifying -INFINITY , the constraint is binding. So if the only place left to run is where myresource2 already is, then myresource1 may not run anywhere. 7.3.2. Advisory Placement If mandatory placement is about "must" and "must not", then advisory placement is the "I would prefer if" alternative. For constraints with scores greater than -INFINITY and less than INFINITY , the cluster will try to accommodate your wishes but may ignore them if the alternative is to stop some of the cluster resources. Advisory colocation constraints can combine with other elements of the configuration to behave as if they were mandatory. 7.3.3. Colocating Sets of Resources If your configuration requires that you create a set of resources that is colocated and started in order, you can configure a resource group that contains those resources, as described in Section 6.5, "Resource Groups" .
There are some situations, however, where configuring the resources that need to be colocated as a resource group is not appropriate: You may need to colocate a set of resources, but the resources do not necessarily need to start in order. You may have a resource C that must be colocated with either resource A or B, but there is no relationship between A and B. You may have resources C and D that must be colocated with both resources A and B, but there is no relationship between A and B or between C and D. In these situations, you can create a colocation constraint on a set or sets of resources with the pcs constraint colocation set command. You can set the following options for a set of resources with the pcs constraint colocation set command. sequential , which can be set to true or false to indicate whether the members of the set must be colocated with each other. Setting sequential to false allows the members of this set to be colocated with another set listed later in the constraint, regardless of which members of this set are active. Therefore, this option makes sense only if another set is listed after this one in the constraint; otherwise, the constraint has no effect. role , which can be set to Stopped , Started , Master , or Slave . For information on multistate resources, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . You can set the following constraint options for a set of resources following the setoptions parameter of the pcs constraint colocation set command. kind , to indicate how to enforce the constraint. For information on this option, see Table 7.3, "Properties of an Order Constraint" . symmetrical , to indicate the order in which to stop the resources. If true, which is the default, stop the resources in the reverse order. Default value: true id , to provide a name for the constraint you are defining. When listing members of a set, each member is colocated with the one before it. For example, "set A B" means "B is colocated with A". However, when listing multiple sets, each set is colocated with the one after it. For example, "set C D sequential=false set A B" means "set C D (where C and D have no relation between each other) is colocated with set A B (where B is colocated with A)". The following command creates a colocation constraint on a set or sets of resources. 7.3.4. Removing Colocation Constraints Use the following command to remove colocation constraints with source_resource .
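As a combined illustration using placeholder resource names (a sketch that follows the syntax described above, not an example from a specific configuration), the following commands create an advisory constraint with a finite score, colocate set C D with set A B, and then remove a constraint:

pcs constraint colocation add myresource1 with myresource2 score=200
pcs constraint colocation set resourceC resourceD sequential=false set resourceA resourceB setoptions id=colocate-sets
pcs constraint colocation remove myresource1 myresource2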
|
[
"pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]",
"pcs constraint colocation add myresource1 with myresource2 score=INFINITY",
"pcs constraint colocation add myresource1 with myresource2 score=-INFINITY",
"pcs constraint colocation set resource1 resource2 [ resourceN ]... [ options ] [set resourceX resourceY ... [ options ]] [setoptions [ constraint_options ]]",
"pcs constraint colocation remove source_resource target_resource"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-colocationconstraints-HAAR
|
Chapter 2. Differences from upstream OpenJDK 11
|
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
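To see how these RHEL integrations behave on a given host, you can check the FIPS and cryptographic policy state from the command line (a sketch; the commands assume a RHEL 8 host and are not specific to Red Hat build of OpenJDK):

cat /proc/sys/crypto/fips_enabled
update-crypto-policies --show

A value of 1 from the first command indicates that the kernel is in FIPS mode, and the second command prints the active system-wide cryptographic policy from which Red Hat build of OpenJDK derives its enabled algorithms.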
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.9/rn-openjdk-diff-from-upstream
|
Chapter 11. Planning for Installation on Power Systems Servers
|
Chapter 11. Planning for Installation on Power Systems Servers 11.1. Upgrade or Install? While automated in-place upgrades are now supported, the support is currently limited to AMD64 and Intel 64 systems. If you have an existing installation of Red Hat Enterprise Linux on an IBM Power Systems server, you must perform a clean install to migrate to Red Hat Enterprise Linux 7. A clean install is performed by backing up all data from the system, formatting disk partitions, performing an installation of Red Hat Enterprise Linux 7 from installation media, and then restoring any user data.
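A minimal sketch of the backup and restore steps around a clean install (the backup target and paths are placeholders; adjust them to your environment and verify that all required data is included before formatting any disks):

tar -czf /mnt/backup/user-data.tar.gz /home
(install Red Hat Enterprise Linux 7 from the installation media)
tar -xzf /mnt/backup/user-data.tar.gz -C /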
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-steps-ppc
|
Deploying RHEL 8 on Google Cloud Platform
|
Deploying RHEL 8 on Google Cloud Platform Red Hat Enterprise Linux 8 Obtaining RHEL system images and creating RHEL instances on GCP Red Hat Customer Content Services
|
[
"provider = \"gcp\" [settings] bucket = \"GCP_BUCKET\" region = \"GCP_STORAGE_REGION\" object = \"OBJECT_KEY\" credentials = \"GCP_CREDENTIALS\"",
"sudo base64 -w 0 cee-gcp-nasa-476a1fa485b7.json",
"sudo composer-cli compose start BLUEPRINT-NAME gce IMAGE_KEY gcp-config.toml",
"sudo composer-cli compose status",
"base64 -w 0 \"USD{GOOGLE_APPLICATION_CREDENTIALS}\"",
"provider = \"gcp\" [settings] provider = \"gcp\" [settings] credentials = \"GCP_CREDENTIALS\"",
"[gcp] credentials = \" PATH_TO_GCP_ACCOUNT_CREDENTIALS \"",
"virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel8.iso,bus=virtio --os-variant=rhel8.0",
"subscription-manager register --auto-attach",
"yum install cloud-init systemctl enable --now cloud-init.service",
"gcloud projects create my-gcp-project3 --name project3",
"ssh-keygen -t rsa -f ~/.ssh/google_compute_engine",
"ssh -i ~/.ssh/google_compute_engine <username> @ <instance_external_ip>",
"gcloud auth login",
"gsutil mb gs://bucket_name",
"qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 disk.raw",
"tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw",
"gsutil cp disk.raw.tar.gz gs://bucket_name",
"gcloud compute images create my-image-name --source-uri gs://my-bucket-name/disk.raw.tar.gz",
"gcloud compute instances create myinstance3 --zone=us-central1-a --image test-iso2-image",
"gcloud compute instances list",
"ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>",
"gcloud auth login",
"gsutil mb gs:// BucketName",
"gsutil mb gs://rhel-ha-bucket",
"qemu-img convert -f qcow2 ImageName .qcow2 -O raw disk.raw",
"tar -Sczf ImageName .tar.gz disk.raw",
"gsutil cp ImageName .tar.gz gs:// BucketName",
"gcloud compute images create BaseImageName --source-uri gs:// BucketName / BaseImageName .tar.gz",
"[admin@localhost ~] USD gcloud compute images create rhel-76-server --source-uri gs://user-rhelha/rhel-server-76.tar.gz Created [https://www.googleapis.com/compute/v1/projects/MyProject/global/images/rhel-server-76]. NAME PROJECT FAMILY DEPRECATED STATUS rhel-76-server rhel-ha-testing-on-gcp READY",
"gcloud compute instances create BaseInstanceName --can-ip-forward --machine-type n1-standard-2 --image BaseImageName --service-account ServiceAccountEmail",
"[admin@localhost ~] USD gcloud compute instances create rhel-76-server-base-instance --can-ip-forward --machine-type n1-standard-2 --image rhel-76-server --service-account [email protected] Created [https://www.googleapis.com/compute/v1/projects/rhel-ha-testing-on-gcp/zones/us-east1-b/instances/rhel-76-server-base-instance]. NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS rhel-76-server-base-instance us-east1-bn1-standard-2 10.10.10.3 192.227.54.211 RUNNING",
"ssh root@PublicIPaddress",
"subscription-manager repos --disable= *",
"subscription-manager repos --enable=rhel-8-server-rpms",
"yum update -y",
"metadata.google.internal iburst Google NTP server",
"rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules",
"chkconfig network on",
"systemctl enable sshd systemctl is-enabled sshd",
"ln -sf /usr/share/zoneinfo/UTC /etc/localtime",
"Server times out connections after several minutes of inactivity. Keep alive ssh connections by sending a packet every 7 minutes. ServerAliveInterval 420",
"PermitRootLogin no PasswordAuthentication no AllowTcpForwarding yes X11Forwarding no PermitTunnel no Compute times out connections after 10 minutes of inactivity. Keep ssh connections alive by sending a packet every 7 minutes. ClientAliveInterval 420",
"ssh_pwauth from 1 to 0. ssh_pwauth: 0",
"subscription-manager unregister",
"export HISTSIZE=0",
"sync",
"gcloud compute disks snapshot InstanceName --snapshot-names SnapshotName",
"gcloud compute images create ConfiguredImageFromSnapshot --source-snapshot SnapshotName",
"gcloud compute instance-templates create InstanceTemplateName --can-ip-forward --machine-type n1-standard-2 --image ConfiguredImageFromSnapshot --service-account ServiceAccountEmailAddress",
"[admin@localhost ~] USD gcloud compute instance-templates create rhel-81-instance-template --can-ip-forward --machine-type n1-standard-2 --image rhel-81-gcp-image --service-account [email protected] Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/global/instanceTemplates/rhel-81-instance-template]. NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP rhel-81-instance-template n1-standard-2 2018-07-25T11:09:30.506-07:00",
"gcloud compute instances create NodeName01 NodeName02 --source-instance-template InstanceTemplateName --zone RegionZone --network= NetworkName --subnet= SubnetName",
"[admin@localhost ~] USD gcloud compute instances create rhel81-node-01 rhel81-node-02 rhel81-node-03 --source-instance-template rhel-81-instance-template --zone us-west1-b --network=projectVPC --subnet=range0 Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-01]. Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-02]. Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-03]. NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS rhel81-node-01 us-west1-b n1-standard-2 10.10.10.4 192.230.25.81 RUNNING rhel81-node-02 us-west1-b n1-standard-2 10.10.10.5 192.230.81.253 RUNNING rhel81-node-03 us-east1-b n1-standard-2 10.10.10.6 192.230.102.15 RUNNING",
"subscription-manager repos --disable= *",
"subscription-manager repos --enable=rhel-8-server-rpms subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms",
"yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp",
"yum update -y",
"passwd hacluster",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload",
"systemctl start pcsd.service systemctl enable pcsd.service Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.",
"systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2018-06-25 19:21:42 UTC; 15s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5901 (pcsd) CGroup: /system.slice/pcsd.service └─5901 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &",
"pcs host auth hostname1 hostname2 hostname3 Username: hacluster Password: hostname1 : Authorized hostname2 : Authorized hostname3 : Authorized",
"pcs cluster setup cluster-name hostname1 hostname2 hostname3",
"pcs cluster enable --all",
"pcs cluster start --all",
"fence_gce --zone us-west1-b --project=rhel-ha-on-gcp -o list",
"fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list 4435801234567893181,InstanceName-3 4081901234567896811,InstanceName-1 7173601234567893341,InstanceName-2",
"pcs stonith create FenceDeviceName fence_gce zone= Region-Zone project= MyProject",
"pcs status",
"pcs status Cluster name: gcp-cluster Stack: corosync Current DC: rhel81-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum Last updated: Fri Jul 27 12:53:25 2018 Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel81-node-01 3 nodes configured 3 resources configured Online: [ rhel81-node-01 rhel81-node-02 rhel81-node-03 ] Full list of resources: us-west1-b-fence (stonith:fence_gce): Started rhel81-node-01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled",
"gcloud-ra init",
"pcs resource describe gcp-vpc-move-vip",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip= UnusedIPaddress/CIDRblock",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.10.200/32",
"pcs resource create vip IPaddr2 nic= interface ip= AliasIPaddress cidr_netmask=32",
"pcs resource create vip IPaddr2 nic=eth0 ip=10.10.10.200 cidr_netmask=32",
"pcs resource group add vipgrp aliasip vip",
"pcs status",
"pcs resource move vip Node",
"pcs resource move vip rhel81-node-03",
"pcs status",
"gcloud-ra compute networks subnets update SubnetName --region RegionName --add-secondary-ranges SecondarySubnetName = SecondarySubnetRange",
"gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip= UnusedIPaddress/CIDRblock",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.20.200/32",
"pcs resource create vip IPaddr2 nic= interface ip= AliasIPaddress cidr_netmask=32",
"pcs resource create vip IPaddr2 nic=eth0 ip=10.10.20.200 cidr_netmask=32",
"pcs resource group add vipgrp aliasip vip",
"pcs status",
"pcs resource move vip Node",
"pcs resource move vip rhel81-node-03",
"pcs status"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/deploying_rhel_8_on_google_cloud_platform/console.redhat.com
|
Chapter 4. Spring Security with Red Hat Process Automation Manager
|
Chapter 4. Spring Security with Red Hat Process Automation Manager Spring Security is provided by a collection of servlet filters that make up the Spring Security library . These filters provide authentication through user names and passwords and authorization through roles. The default Spring Security implementation generated in a Red Hat Process Automation Manager Spring Boot application provides authentication without authorization. This means that anyone with a user name and password valid for the application can access the application without a role. The servlet filters protect your Spring Boot application against common exploits such as cross-site request forgery (CSRF) and cross-origin resource sharing (CORS). Spring Web relies on the DispatcherServlet to redirect incoming HTTP requests to your underlying Java REST resources annotated with the @Controller annotation. The DispatcherServlet is agnostic of elements such as security. It is good practice and more efficient to handle implementation details such as security outside of the business application logic. Therefore, Spring uses filters to intercept HTTP requests before routing them to the DispatcherServlet . A typical Spring Security implementation consists of the following steps that use multiple servlet filters: Extract and decode or decrypt user credentials from the HTTP request. Complete authentication by validating the credentials against the corporate identity provider, for example a database, a web service, or Red Hat Single Sign-On. Complete authorization by determining whether the authorized user has access rights to perform the request. If the user is authenticated and authorized, propagate the request to the DispatcherServlet . Spring breaks these steps down into individual filters and chains them together in a FilterChain. This chaining method provides the flexibility required to work with almost any identity provider and security framework. With Spring Security, you can define a FilterChain for your application programmatically. The following section is from the business-application-service/src/main/java/com/company/service/DefaultWebSecurityConfig.java file from a Spring Boot business application service created using the Maven archetype command. For information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . @Configuration("kieServerSecurity") @EnableWebSecurity public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter { @Override (1) protected void configure(HttpSecurity http) throws Exception { http .cors().and() .csrf().disable() (2) .authorizeRequests() (3) .antMatchers("/rest/*").authenticated().and() .httpBasic().and() (4) .headers().frameOptions().disable(); (5) } (1) Overrides the default configure(HttpSecurity http) method and defines a custom FilterChain using the Spring HttpClient fluent API/DSL (2) Disables common exploit filters for CORS and CSRF tokens for local testing (3) Requires authentication for any requests made to the pattern 'rest/*' but no roles are defined (4) Allows basic authentication through the authorization header, for example header 'Authorization: Basic dGVzdF91c2VyOnBhc3N3b3Jk' (5) Removes the 'X-Frame-Options' header from request/response This configuration allows any authenticated user to execute the KIE API. Because the default implementation is not integrated into any external identity provider, users are defined in memory, in the same DefaultWebSecurityConfig class.
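With the generated defaults in place, a quick way to exercise the secured REST endpoint is an HTTP request with basic authentication (a sketch; the port and path assume the default business application configuration and one of the in-memory users shown below):

curl -u user:user http://localhost:8090/rest/server

The same request without credentials should be rejected with an HTTP 401 response.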
The following section shows the users that are provided when you create a Red Hat Process Automation Manager Spring Boot business application: @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication().withUser("user").password("user").roles("kie-server"); auth.inMemoryAuthentication().withUser("wbadmin").password("wbadmin").roles("admin"); auth.inMemoryAuthentication().withUser("kieserver").password("kieserver1!").roles("kie-server"); } 4.1. Using Spring Security to authenticate with authorization By default, anyone with a user name and password valid for the Red Hat Process Automation Manager Spring Boot application can access the application without requiring a role. Spring Security authentication and authorization are derived from the HttpSecurity filter chain configuration. To protect the REST API from users that do not have a specific role mapping, use the Spring Security .authorizeRequests() method to match the URLs that you want to authorize. Prerequisites You have a Red Hat Process Automation Manager Spring Boot application. Procedure In the directory that contains your Red Hat Process Automation Manager Spring Boot application, open the business-application-service/src/main/java/com/company/service/DefaultWebSecurityConfig.java file in a text editor or IDE. To authorize requests for access by an authenticated user only if they have a specific role, edit the .antMatchers("/rest/*").authenticated().and() line in one of the following ways: To authorize for a single role, edit the antMatchers method as shown in the following example, where <role> is the role that the user must have for access: @Configuration("kieServerSecurity") @EnableWebSecurity public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .cors().and().csrf().disable() .authorizeRequests() .antMatchers("/**").hasRole("<role>") .anyRequest().authenticated() .and().httpBasic() .and().headers().frameOptions().disable(); } ... To authorize a user that has one of a range of roles, edit the antMatchers method as shown in the following example, where <role> and <role1> are each roles the user can have for access: @Configuration("kieServerSecurity") @EnableWebSecurity public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .cors().and().csrf().disable() .authorizeRequests() .antMatchers("/**").hasAnyRole("<role>", "<role1>") .anyRequest().authenticated() .and().httpBasic() .and().headers().frameOptions().disable(); } ... The authorizeRequests method requires authorization of requests for a specific expression. All requests must be successfully authenticated. Authentication is performed using HTTP basic authentication. If an authenticated user tries to access a resource that is protected for a role that they do not have, the user receives an HTTP 403 (Forbidden) error. 4.2. Disabling Spring Security in a Red Hat Process Automation Manager business application You can configure Spring Security in a Red Hat Process Automation Manager business application to provide the security context without authentication. Prerequisites You have a Red Hat Process Automation Manager Spring Boot application.
Procedure In the directory that contains your Red Hat Process Automation Manager Spring Boot application, open the business-application-service/src/main/java/com/company/service/DefaultWebSecurityConfig.java file in a text editor or integrated development environment (IDE). Edit the .antMatchers method as shown in the following example: @Override protected void configure(HttpSecurity http) throws Exception { http .cors().and().csrf().disable() .authorizeRequests() .antMatchers("/*") .permitAll() .and().headers().frameOptions().disable(); } The permitAll method allows any and all requests for the specified URL pattern. Note Because no security context is passed in the HttpServletRequest , Spring creates an AnonymousAuthenticationToken and populates the SecurityContext with the anonymousUser user with no designated roles other than the ROLE_ANONYMOUS role. The user will not have access to many of the features of the application, for example they will be unable to assign actions to group assigned tasks. 4.3. Using Spring Security with preauthentication If you disable Spring Security authentication by using the permitAll method, any user can log in to the application, but users will have limited access and functionality. However, you can preauthenticate a user, for example a designated service account, so a group of users can use the same login but have all of the permissions that they require. That way, you do not need to create credentials for each user. The easiest way to implement preauthentication is to create a custom filter servlet and add it before the security FilterChain in the DefaultWebSecurityConfig class. This way, you can inject a customized, profile-based security context, control its contents, and keep it simple. Prerequisites You have a Red Hat Process Automation Manager Spring Boot application and you have disabled Spring Security as described in Section 4.2, "Disabling Spring Security in a Red Hat Process Automation Manager business application" .
Procedure Create the following class that extends the AnonymousAuthenticationFilter class: import org.springframework.security.authentication.AnonymousAuthenticationToken; import org.springframework.security.core.Authentication; import org.springframework.security.core.AuthenticationException; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.context.SecurityContextHolder; import org.springframework.security.web.authentication.AnonymousAuthenticationFilter; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import javax.servlet.FilterChain; import javax.servlet.ServletException; import javax.servlet.ServletRequest; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletRequest; import java.io.IOException; import java.util.Arrays; import java.util.Collections; import java.util.List; public class <CLASS_NAME> extends AnonymousAuthenticationFilter { private static final Logger log = LoggerFactory.getLogger(<CLASS_NAME>.class); public <CLASS_NAME>() { super("PROXY_AUTH_FILTER"); } @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException { SecurityContextHolder.getContext().setAuthentication(createAuthentication((HttpServletRequest) req)); log.info("SecurityContextHolder pre-auth user: {}", SecurityContextHolder.getContext()); if (log.isDebugEnabled()) { log.debug("Populated SecurityContextHolder with authenticated user: {}", SecurityContextHolder.getContext().getAuthentication()); } chain.doFilter(req, res); } @Override protected Authentication createAuthentication(final HttpServletRequest request) throws AuthenticationException { log.info("<ANONYMOUS_USER>"); List<? extends GrantedAuthority> authorities = Collections .unmodifiableList(Arrays.asList(new SimpleGrantedAuthority("<ROLE>") )); return new AnonymousAuthenticationToken("ANONYMOUS", "<ANONYMOUS_USER>", authorities); } } Replace the following variables: Replace <CLASS_NAME> with a name for this class, for example AnonymousAuthFilter . The constructor must use the same name. Replace <ANONYMOUS_USER> with a user ID, for example Service_Group . Replace <ROLE> with the role that has the privileges that you want to give to <ANONYMOUS_USER> . If you want to give <ANONYMOUS_USER> more than one role, add additional roles as shown in the following example: .unmodifiableList(Arrays.asList(new SimpleGrantedAuthority("<ROLE>") , new SimpleGrantedAuthority("<ROLE2>") Add .anonymous().authenticationFilter(new <CLASS_NAME>()).and() to the business-application-service/src/main/java/com/company/service/DefaultWebSecurityConfig.java file, where <CLASS_NAME> is the name of the class that you created: @Override protected void configure(HttpSecurity http) throws Exception { http .anonymous().authenticationFilter(new <CLASS_NAME>()).and() // Override anonymousUser .cors().and().csrf().disable() .authorizeRequests() .antMatchers("/*").permitAll() .and().headers().frameOptions().disable(); } 4.4. Configuring the business application with Red Hat Single Sign-On Most organizations provide user and group details through single sign-on (SSO) tokens. You can use Red Hat Single Sign-On (RHSSO) to enable single sign-on between your services and to have a central place to configure and manage your users and roles. Prerequisites You have a Spring Boot business application. Procedure Download and install RHSSO. For instructions, see the Red Hat Single Sign-On Getting Started Guide . 
Configure RHSSO: Either use the default master realm or create a new realm. A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control. Create the springboot-app client and set the AccessType to public. Set a valid redirect URI and web origin according to your local setup, as shown in the following example: Valid redirect URIs: http://localhost:8090/* Web origin: http://localhost:8090 Create realm roles that are used in the application. Create users that are used in the application and assign roles to them. Add the following element and property to the Spring Boot project pom.xml file, where <KEYCLOAK_VERSION> is the version of Keycloak that you are using: <properties> <version.org.keycloak><KEYCLOAK_VERSION></version.org.keycloak> </properties> Add the following dependencies to the Spring Boot project pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>org.keycloak.bom</groupId> <artifactId>keycloak-adapter-bom</artifactId> <version>USD{version.org.keycloak}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> .... <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-spring-boot-starter</artifactId> </dependency> In your Spring Boot project directory, open the business-application-service/src/main/resources/application.properties file and add the following lines: Modify the business-application-service/src/main/java/com/company/service/DefaultWebSecurityConfig.java file to ensure that Spring Security works correctly with RHSSO: import org.keycloak.adapters.KeycloakConfigResolver; import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver; import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider; import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper; import org.springframework.security.core.session.SessionRegistryImpl; import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy; import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy; @Configuration("kieServerSecurity") @EnableWebSecurity public class DefaultWebSecurityConfig extends KeycloakWebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { super.configure(http); http .csrf().disable() .authorizeRequests() .anyRequest().authenticated() .and() .httpBasic(); } @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider(); SimpleAuthorityMapper mapper = new SimpleAuthorityMapper(); mapper.setPrefix(""); keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(mapper); auth.authenticationProvider(keycloakAuthenticationProvider); } @Bean public 
KeycloakConfigResolver KeycloakConfigResolver() { return new KeycloakSpringBootConfigResolver(); } @Override protected SessionAuthenticationStrategy sessionAuthenticationStrategy() { return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl()); } }
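As an optional check of whichever configuration you choose, you can send an authenticated request to the application's REST API with curl. The following sketch is only an illustration: it assumes that the application listens on localhost:8090, that HTTP basic authentication is in use, that the REST API is protected with the kie-server role as described in Section 4.1, and that the in-memory kieserver and wbadmin users shown earlier exist. Adjust the host, port, endpoint, and credentials to match your setup.
# A user with the required role should receive HTTP 200:
curl -u 'kieserver:kieserver1!' -o /dev/null -w "%{http_code}\n" http://localhost:8090/rest/server
# A user without the required role should receive HTTP 403 (Forbidden):
curl -u 'wbadmin:wbadmin' -o /dev/null -w "%{http_code}\n" http://localhost:8090/rest/server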
|
[
"@Configuration(\"kieServerSecurity\") @EnableWebSecurity public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter { @Override (1) protected void configure(HttpSecurity http) throws Exception { http .cors().and() .csrf().disable() (2) .authorizeRequests() (3) .antMatchers(\"/rest/*\").authenticated().and() .httpBasic().and() (4) .headers().frameOptions().disable(); (5) }",
"@Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication().withUser(\"user\").password(\"user\").roles(\"kie-server\"); auth.inMemoryAuthentication().withUser(\"wbadmin\").password(\"wbadmin\").roles(\"admin\"); auth.inMemoryAuthentication().withUser(\"kieserver\").password(\"kieserver1!\").roles(\"kie-server\"); }",
"@Configuration(\"kieServerSecurity\") @EnableWebSecurity public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .cors().and().csrf().disable() .authorizeRequests() .antMatchers(\"/**\").hasRole(\"<role>\") .anyRequest().authenticated() .and().httpBasic() .and().headers().frameOptions().disable(); }",
"@Configuration(\"kieServerSecurity\") @EnableWebSecurity public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .cors().and().csrf().disable() .authorizeRequests() .antMatchers(\"/**\").hasAnyRole(\"<role>\", \"<role1\") .anyRequest().authenticated() .and().httpBasic() .and().headers().frameOptions().disable(); }",
"@Override protected void configure(HttpSecurity http) throws Exception { http .cors().and().csrf().disable() .authorizeRequests() .antMatchers(\"/*\") .permitAll() .and().headers().frameOptions().disable(); }",
"import org.springframework.security.authentication.AnonymousAuthenticationToken; import org.springframework.security.core.Authentication; import org.springframework.security.core.AuthenticationException; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.context.SecurityContextHolder; import org.springframework.security.web.authentication.AnonymousAuthenticationFilter; import javax.servlet.FilterChain; import javax.servlet.ServletException; import javax.servlet.ServletRequest; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletRequest; import java.io.IOException; import java.util.Arrays; import java.util.Collections; import java.util.List; public class <CLASS_NAME> extends AnonymousAuthenticationFilter { private static final Logger log = LoggerFactory.getLogger(<CLASS_NAME>.class); public AnonymousAuthFilter() { super(\"PROXY_AUTH_FILTER\"); } @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException { SecurityContextHolder.getContext().setAuthentication(createAuthentication((HttpServletRequest) req)); log.info(\"SecurityContextHolder pre-auth user: {}\", SecurityContextHolder.getContext()); if (log.isDebugEnabled()) { log.debug(\"Populated SecurityContextHolder with authenticated user: {}\", SecurityContextHolder.getContext().getAuthentication()); } chain.doFilter(req, res); } @Override protected Authentication createAuthentication(final HttpServletRequest request) throws AuthenticationException { log.info(\"<ANONYMOUS_USER>\"); List<? extends GrantedAuthority> authorities = Collections .unmodifiableList(Arrays.asList(new SimpleGrantedAuthority(\"<ROLE>\") )); return new AnonymousAuthenticationToken(\"ANONYMOUS\", \"<ANONYMOUS_USER>\", authorities); } }",
".unmodifiableList(Arrays.asList(new SimpleGrantedAuthority(\"<ROLE>\") , new SimpleGrantedAuthority(\"<ROLE2>\")",
"@Override protected void configure(HttpSecurity http) throws Exception { http .anonymous().authenticationFilter(new <CLASS_NAME>()).and() // Override anonymousUser .cors().and().csrf().disable() .authorizeRequests() .antMatchers(\"/*\").permitAll() .and().headers().frameOptions().disable(); }",
"<properties> <version.org.keycloak><KEYCLOAK_VERSION></version.org.keycloak> </properties>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.keycloak.bom</groupId> <artifactId>keycloak-adapter-bom</artifactId> <version>USD{version.org.keycloak}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> . <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-spring-boot-starter</artifactId> </dependency>",
"keycloak security setup keycloak.auth-server-url=http://localhost:8100/auth keycloak.realm=master keycloak.resource=springboot-app keycloak.public-client=true keycloak.principal-attribute=preferred_username keycloak.enable-basic-auth=true",
"import org.keycloak.adapters.KeycloakConfigResolver; import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver; import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider; import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper; import org.springframework.security.core.session.SessionRegistryImpl; import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy; import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy; @Configuration(\"kieServerSecurity\") @EnableWebSecurity public class DefaultWebSecurityConfig extends KeycloakWebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { super.configure(http); http .csrf().disable() .authorizeRequests() .anyRequest().authenticated() .and() .httpBasic(); } @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider(); SimpleAuthorityMapper mapper = new SimpleAuthorityMapper(); mapper.setPrefix(\"\"); keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(mapper); auth.authenticationProvider(keycloakAuthenticationProvider); } @Bean public KeycloakConfigResolver KeycloakConfigResolver() { return new KeycloakSpringBootConfigResolver(); } @Override protected SessionAuthenticationStrategy sessionAuthenticationStrategy() { return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl()); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/bus-app-security-con_business-applications
|
Chapter 5. Red Hat Quay repository overview
|
Chapter 5. Red Hat Quay repository overview A repository provides a central location for storing a related set of container images. These images can be used to build applications along with their dependencies in a standardized format. Repositories are organized by namespaces. Each namespace can have multiple repositories. For example, you might have a namespace for your personal projects, one for your company, or one for a specific team within your organization. Red Hat Quay provides users with access controls for their repositories. Users can make a repository public, meaning that anyone can pull, or download, the images from it, or users can make it private, restricting access to authorized users or teams. There are three ways to create a repository in Red Hat Quay: by pushing an image with the relevant podman command, by using the Red Hat Quay UI, or by using the Red Hat Quay API. Similarly, repositories can be deleted by using the UI or the proper API endpoint. 5.1. Creating a repository by using the UI Use the following procedure to create a repository by using the Red Hat Quay v2 UI. Procedure Click Repositories on the navigation pane. Click Create Repository . Select a namespace, for example, quayadmin , and then enter a Repository name , for example, testrepo . Important Do not use the following words in your repository name: * build * trigger * tag * notification When these words are used for repository names, users are unable to access the repository, and are unable to permanently delete the repository. Attempting to delete these repositories returns the following error: Failed to delete repository <repository_name>, HTTP404 - Not Found. Click Create . Now, your example repository appears on the Repositories page. Optional. Click Settings Repository visibility Make private to set the repository to private. 5.2. Creating a repository by using Podman With the proper credentials, you can use Podman to push an image to a repository that does not yet exist in your Red Hat Quay instance. Pushing an image refers to the process of uploading a container image from your local system or development environment to a container registry like Red Hat Quay. After pushing an image to your registry, a repository is created. If you push an image through the command-line interface (CLI) without first creating a repository on the UI, the created repository is set to Private . Use the following procedure to create an image repository by pushing an image. Prerequisites You have downloaded and installed the podman CLI. You have logged into your registry. You have pulled an image, for example, busybox. Procedure Pull a sample image from an example registry. For example: USD sudo podman pull busybox Example output Trying to pull docker.io/library/busybox... Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 Tag the image on your local system with the new repository and image name. For example: USD sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test Push the image to the registry. Following this step, you can use your browser to see the tagged image in your repository. 
USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test Example output Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 5.3. Creating a repository by using the API Use the following procedure to create an image repository using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a repository using the POST /api/v1/repository endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "repository": "<new_repository_name>", "visibility": "<private>", "description": "<This is a description of the new repository>." }' \ "https://quay-server.example.com/api/v1/repository" Example output {"namespace": "quayadmin", "name": "<new_repository_name>", "kind": "image"} 5.4. Deleting a repository by using the UI You can delete a repository directly on the UI. Prerequisites You have created a repository. Procedure On the Repositories page of the v2 UI, check the box of the repository that you want to delete, for example, quayadmin/busybox . Click the Actions drop-down menu. Click Delete . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 5.5. Deleting a repository by using the Red Hat Quay API Use the following procedure to delete a repository using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to delete a repository using the DELETE /api/v1/repository/{repository} endpoint: USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" The CLI does not return information when deleting a repository from the CLI. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/repository/{repository} command to see if details are returned for the deleted repository: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" Example output {"detail": "Not Found", "error_message": "Not Found", "error_type": "not_found", "title": "not_found", "type": "http://quay-server.example.com/api/v1/error/not_found", "status": 404}
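Whether you have just created a repository with the POST call in Section 5.3 or deleted one as shown above, you can use the same GET /api/v1/repository/{repository} endpoint to confirm the current state. The sketch below reuses the example hostname, namespace, and placeholder token from this chapter; substitute your own values. A repository that exists returns a JSON description of the repository, while a deleted or missing repository returns the "Not Found" error shown above.
curl -X GET -H "Authorization: Bearer <bearer_token>" "https://quay-server.example.com/api/v1/repository/quayadmin/<new_repository_name>"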
|
[
"sudo podman pull busybox",
"Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9",
"sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<private>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"{\"namespace\": \"quayadmin\", \"name\": \"<new_repository_name>\", \"kind\": \"image\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://quay-server.example.com/api/v1/error/not_found\", \"status\": 404}"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/use_red_hat_quay/use-quay-create-repo
|
4.106. java-1.5.0-ibm
|
4.106. java-1.5.0-ibm 4.106.1. RHSA-2012:0508 - Critical: java-1.5.0-ibm security update Updated java-1.5.0-ibm packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The IBM 1.5.0 Java release includes the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. Security Fixes CVE-2011-3389 , CVE-2011-3557 , CVE-2011-3560 , CVE-2011-3563 , CVE-2012-0498 , CVE-2012-0499 , CVE-2012-0501 , CVE-2012-0502 , CVE-2012-0503 , CVE-2012-0505 , CVE-2012-0506 , CVE-2012-0507 This update fixes several vulnerabilities in the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. Detailed vulnerability descriptions are linked from the IBM "Security alerts" page. All users of java-1.5.0-ibm are advised to upgrade to these updated packages, containing the IBM 1.5.0 SR13-FP1 Java release. All running instances of IBM Java must be restarted for this update to take effect.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/java-1_5_0-ibm
|
2.6. Using NetworkManager with Network Scripts
|
2.6. Using NetworkManager with Network Scripts This section describes how to run a script and how to use custom commands in network scripts. The term network scripts refers to the script /etc/init.d/network and any other installed scripts it calls. Although NetworkManager provides the default networking service, scripts and NetworkManager can run in parallel and work together. Red Hat recommends that you test such a setup first. Running Network Script Run a network script only with the systemctl command: systemctl start|stop|restart|status network The systemctl utility clears any existing environment variables and ensures correct execution. In Red Hat Enterprise Linux 7, NetworkManager is started first, and /etc/init.d/network checks with NetworkManager to avoid tampering with NetworkManager 's connections. NetworkManager is intended to be the primary application using sysconfig configuration files, and /etc/init.d/network is intended to be secondary. The /etc/init.d/network script runs: manually - using one of the systemctl commands start|stop|restart network , or on boot and shutdown if the network service is enabled - as a result of the systemctl enable network command. It is a manual process and does not react to events that happen after boot. Users can also call the ifup and ifdown scripts manually. Note The systemctl reload network.service command does not work due to technical limitations of initscripts. To apply a new configuration for the network service, use the restart command: This brings down and brings up all the Network Interface Cards (NICs) to load the new configuration. For more information, see the Red Hat Knowledgebase solution Reload and force-reload options for network service . Using Custom Commands in Network Scripts Custom commands in the /sbin/ifup-local , ifdown-pre-local , and ifdown-local scripts are only executed if these devices are controlled by the /etc/init.d/network service. The ifup-local file does not exist by default. If required, create it under the /sbin/ directory. The ifup-local script is run only by the initscripts and not by NetworkManager . To run a custom script using NetworkManager , create it under the dispatcher.d/ directory. See the section called "Running Dispatcher scripts" . Important Modifying any files included with the initscripts package or related rpms is not recommended. If a user modifies such files, Red Hat does not provide support. Custom tasks can run when network connections go up and down, both with the old network scripts and with NetworkManager . If NetworkManager is enabled, the ifup and ifdown scripts ask NetworkManager whether NetworkManager manages the interface in question, which is found from the " DEVICE= " line in the ifcfg file. Devices managed by NetworkManager : calling ifup When you call ifup and the device is managed by NetworkManager , there are two options: If the device is not already connected, then ifup asks NetworkManager to start the connection. If the device is already connected, then there is nothing to do. calling ifdown When you call ifdown and the device is managed by NetworkManager : ifdown asks NetworkManager to terminate the connection. Devices unmanaged by NetworkManager : If you call either ifup or ifdown , the script starts the connection using the older, non-NetworkManager mechanism that it has used since the time before NetworkManager existed. Running Dispatcher scripts NetworkManager provides a way to run additional custom scripts to start or stop services based on the connection status. 
By default, the /etc/NetworkManager/dispatcher.d/ directory exists and NetworkManager runs scripts there, in alphabetical order. Each script must be an executable file owned by root and must have write permission only for the file owner. For more information about running NetworkManager dispatcher scripts, see the Red Hat Knowledgebase solution How to write a NetworkManager dispatcher script to apply ethtool commands .
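The following is a minimal sketch of such a dispatcher script. The interface name eth0, the file name 20-example, and example.service are placeholders chosen only for the illustration; NetworkManager passes the interface name as the first argument and the action (for example, up or down) as the second argument.
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/20-example (hypothetical example)
# $1 is the interface name, $2 is the action reported by NetworkManager.
interface=$1
action=$2
if [ "$interface" = "eth0" ] && [ "$action" = "up" ]; then
    logger "NetworkManager dispatcher: $interface is up, starting example.service"
    systemctl start example.service
fi
Make the script executable and owned by root, with write permission only for the file owner, for example by setting mode 755 with chmod and ownership root:root with chown.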
|
[
"~]# systemctl restart network.service"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-using_networkmanager_with_network_scripts
|
Chapter 2. Using Control Groups
|
Chapter 2. Using Control Groups As explained in Chapter 3, Subsystems and Tunable Parameters , control groups and the subsystems to which they relate can be manipulated using shell commands and utilities. However, the easiest way to work with cgroups is to install the libcgroup package, which contains a number of cgroup-related command line utilities and their associated man pages. It is possible to mount hierarchies and set cgroup parameters (non-persistently) using shell commands and utilities available on any system. However, using the libcgroup -provided utilities simplifies the process and extends your capabilities. Therefore, this guide focuses on libcgroup commands throughout. In most cases, we have included the equivalent shell commands to help describe the underlying mechanism. However, we recommend that you use the libcgroup commands wherever practical. Note In order to use cgroups, first ensure the libcgroup package is installed on your system by running, as root: 2.1. The cgconfig Service The cgconfig service installed with the libcgroup package provides a convenient way to create hierarchies, attach subsystems to hierarchies, and manage cgroups within those hierarchies. It is recommended that you use cgconfig to manage hierarchies and cgroups on your system. The cgconfig service is not started by default on Red Hat Enterprise Linux 6. When you start the service, it reads the cgroup configuration file - /etc/cgconfig.conf . Cgroups are therefore recreated from session to session and remain persistent. Depending on the contents of the configuration file, cgconfig can create hierarchies, mount necessary file systems, create cgroups, and set subsystem parameters for each group. The default /etc/cgconfig.conf file installed with the libcgroup package creates and mounts an individual hierarchy for each subsystem, and attaches the subsystems to these hierarchies. The cgconfig service also allows you to create configuration files in the /etc/cgconfig.d/ directory and to invoke them from /etc/cgconfig.conf . If you stop the cgconfig service (with the service cgconfig stop command), it unmounts all the hierarchies that it mounted. 2.1.1. The /etc/cgconfig.conf File The /etc/cgconfig.conf file contains two major types of entries - mount and group . Mount entries create and mount hierarchies as virtual file systems, and attach subsystems to those hierarchies. Mount entries are defined using the following syntax: The libcgroup package automatically creates a default /etc/cgconfig.conf file when it is installed. The default configuration file looks as follows: The subsystems listed in the above configuration are automatically mounted to their respective hierarchies under the /cgroup/ directory. It is recommended to use these default hierarchies for specifying control groups. However, in certain cases you may need to create hierarchies manually, for example if they were deleted previously, or if it is beneficial to have a single hierarchy for multiple subsystems (as in Section 4.3, "Per-group Division of CPU and Memory Resources" ). Note that multiple subsystems can be mounted to a single hierarchy, but each subsystem can be mounted only once. See Example 2.1, "Creating a mount entry" for an example of creating a hierarchy. Example 2.1. Creating a mount entry The following example creates a hierarchy for the cpuset subsystem: the equivalent of the shell commands: Since each subsystem can be mounted only once, the above commands would fail if cpuset is already mounted. 
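To check which subsystems are already mounted, and where, you can list them before adding a new mount entry. This is an optional check; the lssubsys utility shown here is provided by the libcgroup package, and the exact output depends on your configuration.
~]# lssubsys -am
or, equivalently:
~]# grep cgroup /proc/mounts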
Group entries create cgroups and set subsystem parameters. Group entries are defined using the following syntax: Note that the permissions section is optional. To define permissions for a group entry, use the following syntax: See Example 2.2, "Creating a group entry" for example usage: Example 2.2. Creating a group entry The following example creates a cgroup for SQL daemons, with permissions for users in the sqladmin group to add tasks to the cgroup and the root user to modify subsystem parameters: When combined with the example of the mount entry in Example 2.1, "Creating a mount entry" , the equivalent shell commands are: Note You must restart the cgconfig service for the changes in the /etc/cgconfig.conf file to take effect. However, note that restarting this service causes the entire cgroup hierarchy to be rebuilt, which removes any previously existing cgroups (for example, any existing cgroups used by libvirtd ). To restart the cgconfig service, use the following command: When you install the libcgroup package, a sample configuration file is written to /etc/cgconfig.conf . The hash symbols (' # ') at the start of each line comment that line out and make it invisible to the cgconfig service. 2.1.2. The /etc/cgconfig.d/ Directory The /etc/cgconfig.d/ directory is reserved for storing configuration files for specific applications and use cases. These files should be created with the .conf suffix and adhere to the same syntax rules as /etc/cgconfig.conf . The cgconfig service first parses the /etc/cgconfig.conf file and then continues with files in the /etc/cgconfig.d/ directory. Note that the order of file parsing is not defined, because it does not make a difference provided that each configuration file is unique. Therefore, do not define the same group or template in multiple configuration files, otherwise they would interfere with each other. Storing specific configuration files in a separate directory makes them easily reusable. If an application is shipped with a dedicated configuration file, you can easily set up cgroups for the application just by copying its configuration file to /etc/cgconfig.d/ .
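As an illustration of such a drop-in file, the following sketch defines a cgroup for a web server in /etc/cgconfig.d/webserver.conf . The group name, subsystems, and parameter values are placeholders chosen only for the example, and they assume the default hierarchies from /etc/cgconfig.conf are in place.
group webserver {
    cpu {
        cpu.shares = 512;
    }
    memory {
        memory.limit_in_bytes = 536870912;
    }
}
After copying a file like this into /etc/cgconfig.d/ , restart the cgconfig service so that the group is created.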
|
[
"~]# yum install libcgroup",
"mount { subsystem = /cgroup/ hierarchy ; ... }",
"mount { cpuset = /cgroup/cpuset; cpu = /cgroup/cpu; cpuacct = /cgroup/cpuacct; memory = /cgroup/memory; devices = /cgroup/devices; freezer = /cgroup/freezer; net_cls = /cgroup/net_cls; blkio = /cgroup/blkio; }",
"mount { cpuset = /cgroup/red; }",
"~]# mkdir /cgroup/red ~]# mount -t cgroup -o cpuset red /cgroup/red",
"group <name> { [ <permissions> ] <controller> { <param name> = <param value> ; ... } ... }",
"perm { task { uid = <task user> ; gid = <task group> ; } admin { uid = <admin name> ; gid = <admin group> ; } }",
"group daemons { cpuset { cpuset.mems = 0; cpuset.cpus = 0; } } group daemons/sql { perm { task { uid = root; gid = sqladmin; } admin { uid = root; gid = root; } } cpuset { cpuset.mems = 0; cpuset.cpus = 0; } }",
"~]# mkdir -p /cgroup/red/daemons/sql ~]# chown root:root /cgroup/red/daemons/sql/* ~]# chown root:sqladmin /cgroup/red/daemons/sql/tasks ~]# echo USD(cgget -n -v -r cpuset.mems /) > /cgroup/red/daemons/cpuset.mems ~]# echo USD(cgget -n -v -r cpuset.cpus /) > /cgroup/red/daemons/cpuset.cpus ~]# echo 0 > /cgroup/red/daemons/sql/cpuset.mems ~]# echo 0 > /cgroup/red/daemons/sql/cpuset.cpus",
"~]# service cgconfig restart"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/ch-Using_Control_Groups
|
Chapter 10. Authentication for Enrolling Certificates
|
Chapter 10. Authentication for Enrolling Certificates This chapter covers how to enroll end entity certificates, how to create and manage server certificates, the authentication methods available in the Certificate System to use when enrolling end entity certificates, and how to set up those authentication methods. Enrollment is the process of issuing certificates to an end entity. The process consists of creating and submitting the request, authenticating the user who requests it, and then approving the request and issuing the certificate. The method used to authenticate the end entity determines the entire enrollment process. There are three ways that the Certificate System can authenticate an entity: In agent-approved enrollment, end-entity requests are sent to an agent for approval. The agent approves the certificate request. In automatic enrollment, end-entity requests are authenticated using a plug-in, and then the certificate request is processed; an agent is not involved in the enrollment process. In CMC enrollment , a third party application can create a request that is signed by an agent and then automatically processed. A Certificate Manager is initially configured for agent-approved enrollment and for CMC authentication. Automated enrollment is enabled by configuring one of the authentication plug-in modules. More than one authentication method can be configured in a single instance of a subsystem. Note For any authentication method, an email can be automatically sent to an end entity when the certificate is issued by configuring automated notifications. See Chapter 12, Using Automated Notifications for more information on notifications. 10.1. Configuring Agent-Approved Enrollment The Certificate Manager is initially configured for agent-approved enrollment. An end entity makes a request, which is sent to the agent queue for an agent's approval. An agent can modify the request, change the status of the request, reject the request, or approve the request. Once the request is approved, the signed request is sent to the Certificate Manager for processing. The Certificate Manager processes the request and issues the certificate. The agent-approved enrollment method is not configurable. If a Certificate Manager is not configured for any other enrollment method, the server automatically sends all certificate-related requests to a queue where they await agent approval. This ensures that all requests that lack authentication credentials are sent to the request queue for agent approval. To use agent-approved enrollment, leave the authentication method blank in the profile's .cfg file. For example:
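auth.instance_id=
By contrast, an automated enrollment method sets this parameter to the ID of the authentication plug-in instance to use. The UidPwdDirAuth value in the following sketch is shown only as an illustration of directory-based authentication; the instance ID depends on the plug-ins configured in your deployment.
auth.instance_id=UidPwdDirAuth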
|
[
"auth.instance_id="
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Authentication_for_Enrolling_Certificates
|
Chapter 6. Network connections
|
Chapter 6. Network connections 6.1. Automatic failover A client can receive information about all master and slave brokers, so that in the event of a connection failure, it can reconnect to the slave broker. The slave broker then automatically re-creates any sessions and consumers that existed on each connection before failover. This feature saves you from having to hand-code manual reconnection logic in your applications. When a session is recreated on the slave, it does not have any knowledge of messages already sent or acknowledged. Any in-flight sends or acknowledgements at the time of failover might also be lost. However, even without transparent failover, it is simple to guarantee once and only once delivery, even in the case of failure, by using a combination of duplicate detection and retrying of transactions. Clients detect connection failure when they have not received packets from the broker within a configurable period of time. See Section 6.3, "Detecting dead connections" for more information. You have a number of methods to configure clients to receive information about master and slave. One option is to configure clients to connect to a specific broker and then receive information about the other brokers in the cluster. See Section 6.7, "Configuring static discovery" for more information. The most common way, however, is to use broker discovery . For details on how to configure broker discovery, see Section 6.6, "Configuring dynamic discovery" . Also, you can configure the client by adding parameters to the query string of the URI used to connect to the broker, as in the example below. Procedure To configure your clients for failover through the use of a query string, ensure the following components of the URI are set properly: The host:port portion of the URI must point to a master broker that is properly configured with a backup. This host and port is used only for the initial connection. The host:port value has nothing to do with the actual connection failover between a live and a backup server. In the example above, localhost:61616 is used for the host:port . (Optional) To use more than one broker as a possible initial connection, group the host:port entries as in the following example: Include the name-value pair ha=true as part of the query string to ensure the client receives information about each master and slave broker in the cluster. Include the name-value pair reconnectAttempts=n , where n is an integer greater than 0. This parameter sets the number of times the client attempts to reconnect to a broker. Note Failover occurs only if ha=true and reconnectAttempts is greater than 0. Also, the client must make an initial connection to the master broker in order to receive information about other brokers. If the initial connection fails, the client can only retry to establish it. See Section 6.1.1, "Failing over during the initial connection" for more information. 6.1.1. Failing over during the initial connection Because the client does not receive information about every broker until after the first connection to the HA cluster, there is a window of time where the client can connect only to the broker included in the connection URI. Therefore, if a failure happens during this initial connection, the client cannot failover to other master brokers, but can only try to re-establish the initial connection. Clients can be configured for a set number of reconnection attempts. Once the number of attempts has been made, an exception is thrown. 
Setting the number of reconnection attempts The examples below show how to set the number of reconnection attempts to 3 using the AMQ Core Protocol JMS client. The default value is 0, that is, try only once. Procedure Set the number of reconnection attempts by passing a value to ServerLocator.setInitialConnectAttempts() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setInitialConnectAttempts(3); Setting a global number of reconnection attempts Alternatively, you can apply a global value for the maximum number of reconnection attempts within the broker's configuration. The maximum is applied to all client connections. Procedure Edit <broker-instance-dir>/etc/broker.xml by adding the initial-connect-attempts configuration element and providing a value for the number of attempts, as in the example below. <configuration> <core> ... <initial-connect-attempts>3</initial-connect-attempts> 1 ... </core> </configuration> 1 All clients connecting to the broker are allowed a maximum of three attempts to reconnect. The default is -1, which allows clients unlimited attempts. 6.1.2. Handling blocking calls during failover When failover occurs and the client is waiting for a response from the broker to continue its execution, the newly created session does not have any knowledge of the call that was in progress. The initial call might otherwise hang forever, waiting for a response that never comes. To prevent this, the broker is designed to unblock any blocking calls that were in progress at the time of failover by making them throw an exception. Client code can catch these exceptions and retry any operations if desired. When using AMQ Core Protocol JMS clients, if the unblocked method is a call to commit() or prepare() , the transaction is automatically rolled back and the broker throws an exception. 6.1.3. Handling failover with transactions When using AMQ Core Protocol JMS clients, if the session is transactional and messages have already been sent or acknowledged in the current transaction, the broker cannot be sure whether those messages or their acknowledgements were lost during the failover. Consequently, the transaction is marked for rollback only. Any subsequent attempt to commit it throws a javax.jms.TransactionRolledBackException . Warning The caveat to this rule is when XA is used. If a two-phase commit is used and prepare() has already been called, rolling back could cause a HeuristicMixedException . Because of this, the commit throws an XAException.XA_RETRY exception, which informs the Transaction Manager that it should retry the commit at some later point. If the original commit has not occurred, it still exists and can be committed. If the commit does not exist, it is assumed to have been committed, although the transaction manager might log a warning. A side effect of this exception is that any nonpersistent messages are lost. To avoid such losses, always use persistent messages when using XA. This is not an issue with acknowledgements since they are flushed to the broker before prepare() is called. The AMQ Core Protocol JMS client code must catch the exception and perform any necessary client side rollback. There is no need to roll back the session, however, because it was already rolled back. The user can then retry the transactional operations on the same session. If failover occurs when a commit call is being executed, the broker unblocks the call to prevent the AMQ Core Protocol JMS client from waiting indefinitely for a response. 
Consequently, the client cannot determine whether the transaction commit was actually processed on the master broker before failure occurred. To remedy this, the AMQ Core Protocol JMS client can enable duplicate detection in the transaction, and retry the transaction operations after the call is unblocked. If the transaction was successfully committed on the master broker before failover, duplicate detection ensures that any durable messages present in the transaction when it is retried are ignored on the broker side. This prevents messages from being sent more than once. If the session is non-transactional, messages or acknowledgements can be lost in case of failover. If you want to provide once and only once delivery guarantees for non-transacted sessions, enable duplicate detection and catch unblock exceptions. 6.1.4. Getting notified of connection failure JMS provides a standard mechanism for getting notified asynchronously of connection failure: javax.jms.ExceptionListener . Any ExceptionListener or SessionFailureListener instance is always called by the broker if a connection failure occurs, whether the connection was successfully failed over, reconnected, or reattached. You can find out if a reconnect or a reattach has happened by examining the failedOver flag passed in on the connectionFailed method of SessionFailureListener . Alternatively, you can inspect the error code of the javax.jms.JMSException , which can be one of the following: Table 6.1. JMSException error codes Error code Description FAILOVER Failover has occurred and the broker has successfully reattached or reconnected DISCONNECT No failover has occurred and the broker is disconnected 6.2. Application-level failover In some cases you might not want automatic client failover, but prefer to code your own reconnection logic in a failure handler instead. This is known as application-level failover, since the failover is handled at the application level. To implement application-level failover when using JMS, set an ExceptionListener class on the JMS connection. The ExceptionListener is called by the broker in the event that a connection failure is detected. In your ExceptionListener , you should close your old JMS connections. You might also want to look up new connection factory instances from JNDI and create new connections. 6.3. Detecting dead connections As long as it is receiving data from the broker, the client considers a connection to be alive. Configure the client to check its connection for failure by providing a value for the client-failure-check-period property. The default check period for a network connection is 30,000 milliseconds, or 30 seconds, while the default value for an in-VM connection is -1, which means the client never fails the connection from its side if no data is received. Typically, you set the check period to be much lower than the value used for the broker's connection time-to-live, which ensures that clients can reconnect in case of a temporary failure. Setting the check period for detecting dead connections The examples below show how to set the check period to 10,000 milliseconds. Procedure If you are using JNDI, set the check period within the JNDI context environment, jndi.properties , for example, as below. If you are not using JNDI, set the check period directly by passing a value to ActiveMQConnectionFactory.setClientFailureCheckPeriod() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setClientFailureCheckPeriod(10000); 6.4. 
Configuring time-to-live By default, clients can set a time-to-live (TTL) for their own connections. The examples below show you how to set the TTL. Procedure If you are using JNDI to instantiate your connection factory, you can specify the TTL in the JNDI context environment, jndi.properties , using the parameter connectionTtl . If you are not using JNDI, the connection TTL is defined by the ConnectionTTL attribute on an ActiveMQConnectionFactory instance. ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConnectionTTL(30000); 6.5. Closing connections A client application must close its resources in a controlled manner before it exits to prevent dead connections from occurring. In Java, it is recommended to close connections inside a finally block: Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ...use the connection... } finally { if (jmsConnection != null) { jmsConnection.close(); } } 6.6. Configuring dynamic discovery You can configure AMQ Core Protocol JMS to discover a list of brokers when attempting to establish a connection. If you are using JNDI on the client to look up your JMS connection factory instances, you can specify these parameters in the JNDI context environment. Typically, the parameters are defined in a file named jndi.properties . The host and port in the URI for the connection factory should match the group-address and group-port from the corresponding broadcast-group inside the broker's broker.xml configuration file. Below is an example of a jndi.properties file configured to connect to a broker's discovery group. When this connection factory is downloaded from JNDI by a client application and JMS connections are created from it, those connections will be load-balanced across the list of servers that the discovery group maintains by listening on the multicast address specified in the broker's discovery group configuration. As an alternative to using JNDI, you can specify the discovery group parameters directly in your Java code when creating the JMS connection factory. The code below provides an example of how to do this. final String groupAddress = "231.7.7.7"; final int groupPort = 9876; DiscoveryGroupConfiguration discoveryGroupConfiguration = new DiscoveryGroupConfiguration(); UDPBroadcastEndpointFactory udpBroadcastEndpointFactory = new UDPBroadcastEndpointFactory(); udpBroadcastEndpointFactory.setGroupAddress(groupAddress).setGroupPort(groupPort); discoveryGroupConfiguration.setBroadcastEndpointFactory(udpBroadcastEndpointFactory); ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA (discoveryGroupConfiguration, JMSFactoryType.CF); Connection jmsConnection1 = jmsConnectionFactory.createConnection(); Connection jmsConnection2 = jmsConnectionFactory.createConnection(); The refresh timeout can be set directly on the DiscoveryGroupConfiguration by using the setter method setRefreshTimeout() . The default value is 10000 milliseconds. On first usage, the connection factory will make sure it waits this long since creation before creating the first connection. The default wait time is 10000 milliseconds, but you can change it by passing a new value to DiscoveryGroupConfiguration.setDiscoveryInitialWaitTimeout() . 6.7. Configuring static discovery Sometimes it may be impossible to use UDP on the network you are using. In this case you can configure a connection with an initial list of possible servers. 
The list can be just one broker that you know will always be available, or a list of brokers where at least one will be available. This does not mean that you have to know where all your servers are going to be hosted. You can configure the client to use these known, reliable brokers for the initial connection. After the client connects, the connection details of the other brokers are propagated from the broker to the client. If you are using JNDI on the client to look up your JMS connection factory instances, you can specify these parameters in the JNDI context environment. Typically, the parameters are defined in a file named jndi.properties . Below is an example jndi.properties file that provides a static list of brokers instead of using dynamic discovery. When the above connection factory is used by a client, its connections will be load-balanced across the list of brokers defined within the parentheses () . If you are instantiating the JMS connection factory directly, you can specify the connector list explicitly when creating the JMS connection factory, as in the example below. HashMap<String, Object> map = new HashMap<String, Object>(); map.put("host", "myhost"); map.put("port", "61616"); TransportConfiguration broker1 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put("host", "myhost2"); map2.put("port", "61617"); TransportConfiguration broker2 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map2); ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithHA (JMSFactoryType.CF, broker1, broker2); 6.8. Configuring a broker connector Connectors define how clients can connect to the broker. You can configure them from the client using the JMS connection factory. Map<String, Object> connectionParams = new HashMap<String, Object>(); connectionParams.put(org.apache.activemq.artemis.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 61617); TransportConfiguration transportConfiguration = new TransportConfiguration( "org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory", connectionParams); ConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration); Connection jmsConnection = connectionFactory.createConnection();
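To complement the description of application-level failover in Section 6.2, the following is a minimal sketch of registering a javax.jms.ExceptionListener on a connection. It assumes a connection created from one of the factories shown above; the reconnection logic itself is only indicated by a comment, because it depends on your application.
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
...
// "jmsConnection" is assumed to be a connection created earlier in this chapter.
jmsConnection.setExceptionListener(new ExceptionListener() {
    @Override
    public void onException(JMSException exception) {
        // Close the failed connection, then run your own reconnection logic,
        // for example by looking up a new connection factory from JNDI.
        try {
            jmsConnection.close();
        } catch (JMSException ignored) {
            // The connection is already broken; nothing more to do here.
        }
    }
});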
|
[
"connectionFactory.ConnectionFactory=tcp://localhost:61616?ha=true&reconnectAttempts=3",
"connectionFactory.ConnectionFactory=(tcp://host1:port,tcp://host2:port)?ha=true&reconnectAttempts=3",
"ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setInitialConnectAttempts(3);",
"<configuration> <core> <initial-connect-attempts>3</initial-connect-attempts> 1 </core> </configuration>",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?clientFailureCheckPeriod=10000",
"ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setClientFailureCheckPeriod(10000);",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?connectionTtl=30000",
"ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConnectionTTL(30000);",
"Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ...use the connection } finally { if (jmsConnection != null) { jmsConnection.close(); } }",
"java.naming.factory.initial = ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=udp://231.7.7.7:9876",
"final String groupAddress = \"231.7.7.7\"; final int groupPort = 9876; DiscoveryGroupConfiguration discoveryGroupConfiguration = new DiscoveryGroupConfiguration(); UDPBroadcastEndpointFactory udpBroadcastEndpointFactory = new UDPBroadcastEndpointFactory(); udpBroadcastEndpointFactory.setGroupAddress(groupAddress).setGroupPort(groupPort); discoveryGroupConfiguration.setBroadcastEndpointFactory(udpBroadcastEndpointFactory); ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA (discoveryGroupConfiguration, JMSFactoryType.CF); Connection jmsConnection1 = jmsConnectionFactory.createConnection(); Connection jmsConnection2 = jmsConnectionFactory.createConnection();",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=(tcp://myhost:61616,tcp://myhost2:61616)",
"HashMap<String, Object> map = new HashMap<String, Object>(); map.put(\"host\", \"myhost\"); map.put(\"port\", \"61616\"); TransportConfiguration broker1 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put(\"host\", \"myhost2\"); map2.put(\"port\", \"61617\"); TransportConfiguration broker2 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map2); ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithHA (JMSFactoryType.CF, broker1, broker2);",
"Map<String, Object> connectionParams = new HashMap<String, Object>(); connectionParams.put(org.apache.activemq.artemis.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 61617); TransportConfiguration transportConfiguration = new TransportConfiguration( \"org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory\", connectionParams); ConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration); Connection jmsConnection = connectionFactory.createConnection();"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/network_connections
|
10.5. Configure 802.1Q VLAN Tagging Using a GUI
|
10.5. Configure 802.1Q VLAN Tagging Using a GUI 10.5.1. Establishing a VLAN Connection You can use nm-connection-editor to create a VLAN using an existing interface as the parent interface. Note that VLAN devices are only created automatically if the parent interface is set to connect automatically. Procedure 10.1. Adding a New VLAN Connection Using nm-connection-editor Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select VLAN and click Create . The Editing VLAN connection 1 window appears. On the VLAN tab, select the parent interface from the drop-down list you want to use for the VLAN connection. Enter the VLAN ID Enter a VLAN interface name. This is the name of the VLAN interface that will be created. For example, enp1s0.1 or vlan2 . (Normally this is either the parent interface name plus " . " and the VLAN ID, or " vlan " plus the VLAN ID.) Review and confirm the settings and then click the Save button. To edit the VLAN-specific settings see Section 10.5.1.1, "Configuring the VLAN Tab" . Figure 10.3. Adding a New VLAN Connection Using nm-connection-editor Procedure 10.2. Editing an Existing VLAN Connection Follow these steps to edit an existing VLAN connection. Enter nm-connection-editor in a terminal: Select the connection you want to edit and click the Edit button. Select the General tab. Configure the connection name, auto-connect behavior, and availability settings. These settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the VLAN section of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. Refer to the section called "Editing an Existing Connection with control-center" for more information. Available to all users - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. Refer to Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. To edit the VLAN-specific settings see Section 10.5.1.1, "Configuring the VLAN Tab" . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your VLAN connection, click the Save button to save your customized configuration. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" . or IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 10.5.1.1. Configuring the VLAN Tab If you have already added a new VLAN connection (see Procedure 10.1, "Adding a New VLAN Connection Using nm-connection-editor" for instructions), you can edit the VLAN tab to set the parent interface and the VLAN ID. Parent Interface A previously configured interface can be selected in the drop-down list. VLAN ID The identification number to be used to tag the VLAN network traffic. VLAN interface name The name of the VLAN interface that will be created. For example, enp1s0.1 or vlan2 . Cloned MAC address Optionally sets an alternate MAC address to use for identifying the VLAN interface. This can be used to change the source MAC address for packets sent on this VLAN. 
MTU Optionally sets a Maximum Transmission Unit (MTU) size to be used for packets to be sent over the VLAN connection.
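After saving the connection, you can confirm from a terminal that the VLAN device was created and carries the expected tag. The interface name below is taken from the earlier example and may differ on your system; the detailed output of ip -d link show includes the 802.1Q protocol and the VLAN ID. ~]$ ip -d link show enp1s0.1
~]$ nmcli connection show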
|
[
"~]USD nm-connection-editor",
"~]USD nm-connection-editor"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configure_802_1Q_VLAN_Tagging_Using_a_GUI
|
Chapter 8. Ceph File System snapshots
|
Chapter 8. Ceph File System snapshots As a storage administrator, you can take a point-in-time snapshot of a Ceph File System (CephFS) directory. CephFS snapshots are asynchronous, and you can choose which directory snapshots are created in. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. 8.1. Ceph File System snapshots The Ceph File System (CephFS) snapshotting feature is enabled by default on new Ceph File Systems, but must be manually enabled on existing Ceph File Systems. CephFS snapshots create an immutable, point-in-time view of a Ceph File System. CephFS snapshots are asynchronous and are kept in a special hidden directory in the CephFS directory named .snap . You can specify snapshot creation for any directory within a Ceph File System. When specifying a directory, the snapshot also includes all the subdirectories beneath it. Warning Each Ceph Metadata Server (MDS) cluster allocates the snap identifiers independently. Using snapshots for multiple Ceph File Systems that are sharing a single pool causes snapshot collisions, and results in missing file data. Additional Resources See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Creating a snapshot schedule for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. 8.2. Creating a snapshot for a Ceph File System You can create an immutable, point-in-time view of a Ceph File System (CephFS) by creating a snapshot. Note For a new Ceph File System, snapshots are enabled by default. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph Metadata Server (MDS) node. Procedure Log into the Cephadm shell: Example For existing Ceph File Systems, enable the snapshotting feature: Syntax Example Create a new snapshot subdirectory under the .snap directory: Syntax Example This example creates the new-snaps subdirectory, and this informs the Ceph Metadata Server (MDS) to start making snapshots. To delete snapshots: Syntax Example Important Attempting to delete root-level snapshots, which might contain underlying snapshots, will fail. Additional Resources See the Ceph File System snapshot schedules section in the Red Hat Ceph Storage File System Guide for more details. See the Ceph File System snapshots section in the Red Hat Ceph Storage File System Guide for more details. See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide .
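Putting the commands together, a minimal end-to-end sketch might look like the following. It assumes the file system is named cephfs01 and is mounted at /mnt/cephfs on the client; adjust both to your environment. # Enable snapshots on an existing file system (new file systems have this enabled by default)
ceph fs set cephfs01 allow_new_snaps true

# Take a snapshot of a directory by creating a subdirectory under its hidden .snap directory
mkdir /mnt/cephfs/projects/.snap/before-upgrade

# List the snapshots that exist for that directory
ls /mnt/cephfs/projects/.snap

# Remove the snapshot once it is no longer needed
rmdir /mnt/cephfs/projects/.snap/before-upgrade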
|
[
"cephadm shell",
"ceph fs set FILE_SYSTEM_NAME allow_new_snaps true",
"ceph fs set cephfs01 allow_new_snaps true",
"mkdir NEW_DIRECTORY_PATH",
"mkdir /.snap/new-snaps",
"rmdir NEW_DIRECTORY_PATH",
"rmdir /.snap/new-snaps"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/ceph-file-system-snapshots
|
Chapter 27. Configuration: Defining Access Control for IdM Users
|
Chapter 27. Configuration: Defining Access Control for IdM Users Access control is a security system which defines who can access certain resources - from machines to services to entries - and what kinds of operations they are allowed to perform. Identity Management provides several access control areas to make it very clear what kind of access is being granted and to whom it is granted. As part of this, Identity Management draws a distinction between access controls to resources within the domain and access control to the IdM configuration itself. This chapter details the different internal access control mechanisms that are available for users within IdM to the IdM server and other IdM users. 27.1. About Access Controls for IdM Entries Access control defines the rights or permissions users have been granted to perform operations on other users or objects. 27.1.1. A Brief Look at Access Control Concepts The Identity Management access control structure is based on standard LDAP access controls. Access within the IdM server is based on the IdM users (who are stored in the backend Directory Server instance) who are allowed to access other IdM entities (which are also stored as LDAP entries in the Directory Server instance). An access control instruction (ACI) has three parts: Who can perform the operation . This is the entity who is being granted permission to do something; this is the actor. In LDAP access control models, this is called the bind rule because it defines who the user is (based on their bind information) and can optionally require other limits on the bind attempt, such as restricting attempts to a certain time of day or a certain machine. What can be accessed . This defines the entry which the actor is allowed to perform operations on. This is the target of the access control rule. What type of operation can be performed . The last part is determining what kinds of actions the user is allowed to perform. The most common operations are add, delete, write, read, and search. In Identity Management, all users are implicitly granted read and search rights to all entries in the IdM domain, with restrictions only for sensitive attributes like passwords and Kerberos keys. (Anonymous users are restricted from seeing security-related configuration, like sudo rules and host-based access control.) The only rights which can be granted are add, delete, and write - the permissions required to modify an entry. When any operation is attempted, the first thing that the IdM client does is send user credentials as part of the bind operation. The backend Directory Server checks those user credentials and then checks the user account to see if the user has permission to perform the requested operation. 27.1.2. Access Control Methods in Identity Management To make access control rules simple and clear to implement, Identity Management divides access control definitions into three categories: Self-service rules , which define what operations a user can perform on his own personal entry. The access control type only allows write permissions to attributes within the entry; it does not allow add or delete operations for the entry itself. Delegation rules , which allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. Like self-service rules, this form of access control rule is limited to editing the values of specific attributes; it does not grant the ability to add or remove whole entries or control over unspecified attributes. 
Role-based access control , which creates special access control groups which are then granted much broader authority over all types of entities in the IdM domain. Roles can be granted edit, add, and delete rights, meaning they can be granted complete control over entire entries, not just selected attributes. Some roles are already created and available within Identity Management. Special roles can be created to manage any type of entry in specific ways, such as hosts, automount configuration, netgroups, DNS settings, and IdM configuration.
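To make the three parts concrete, the following hand-written directive shows the general shape of an LDAP ACI as stored in the backend Directory Server. It is purely illustrative — IdM normally generates such rules for you through its self-service, delegation, and role-based mechanisms — and the group and attribute names are hypothetical. aci: (targetattr = "telephoneNumber || mobile")(version 3.0; acl "Helpdesk can edit phone numbers"; allow (write) groupdn = "ldap:///cn=helpdesk,cn=groups,cn=accounts,dc=example,dc=com";) Here the targetattr clause is what can be accessed, allow (write) is the type of operation that can be performed, and the groupdn bind rule identifies who is allowed to perform it.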
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/server-access-controls
|
Role APIs
|
Role APIs OpenShift Container Platform 4.18 Reference guide for role APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/role_apis/index
|
Appendix A. Red Hat Customer Portal Labs Relevant to Storage Administration
|
Appendix A. Red Hat Customer Portal Labs Relevant to Storage Administration Red Hat Customer Portal Labs are tools designed to help you improve performance, troubleshoot issues, identify security problems, and optimize configuration. This appendix provides an overview of Red Hat Customer Portal Labs relevant to storage administration. All Red Hat Customer Portal Labs are available at https://access.redhat.com/labs/ . SCSI decoder The SCSI decoder is designed to decode SCSI error messages in the /log/* files or log file snippets, as these error messages can be hard for the user to understand. Use the SCSI decoder to individually diagnose each SCSI error message and get solutions to resolve problems efficiently. File System Layout Calculator The File System Layout Calculator determines the optimal parameters for creating ext3, ext4, and xfs file systems, after you provide storage options that describe your current or planned storage. Move the cursor over the question mark ("?") for a brief explanation of a particular option, or scroll down to read a summary of all options. Use the File System Layout Calculator to generate a command that creates a file system with provided parameters on the specified RAID storage. Copy the generated command and execute it as root to create the required file system. LVM RAID Calculator The LVM RAID Calculator determines the optimal parameters for creating logical volumes (LVMs) on a given RAID storage after you specify storage options. Move the cursor over the question mark ("?") for a brief explanation of a particular option, or scroll down to read a summary of all options. The LVM RAID Calculator generates a sequence of commands that create LVMs on a given RAID storage. Copy and execute the generated commands one by one as root to create the required LVMs. iSCSI Helper The iSCSI Helper provides block-level storage over Internet Protocol (IP) networks, and enables the use of storage pools within server virtualization. Use the iSCSI Helper to generate a script that prepares the system for its role of an iSCSI target (server) or an iSCSI initiator (client) configured according to the settings that you provide. Samba Configuration Helper The Samba Configuration Helper creates a configuration that provides basic file and printer sharing through Samba: Click Server to specify basic server settings. Click Shares to add the directories that you want to share. Click Printers to add attached printers individually. Multipath Helper The Multipath Helper creates an optimal configuration for multipath devices on Red Hat Enterprise Linux 5, 6, and 7. By following the steps, you can create advanced multipath configurations, such as custom aliases or device blacklists. The Multipath Helper also provides the multipath.conf file for a review. When you achieve the required configuration, download the installation script to run on your server. NFS Helper The NFS Helper simplifies configuring a new NFS server or client. Follow the steps to specify the export and mount options. Then, generate a downloadable NFS configuration script. Multipath Configuration Visualizer The Multipath Configuration Visualizer analyzes files in a sosreport and provides a diagram that visualizes the multipath configuration. 
Use the Multipath Configuration Visualizer to display: Hosts components including Host Bus Adapters (HBAs), local devices, and iSCSI devices on the server side Storage components on the storage side Fabric or Ethernet components between the server and the storage Paths to all mentioned components You can either upload a sosreport compressed in the .xz, .gz, or .bz2 format, or extract a sosreport in a directory that you then select as the source for a client-side analysis. RHEL Backup and Restore Assistant The RHEL Backup and Restore Assistant provides information on back-up and restore tools, and common scenarios of Linux usage. Described tools: dump and restore : for backing up the ext2, ext3, and ext4 file systems. tar and cpio : for archiving or restoring files and folders, especially when backing up the tape drives. rsync : for performing back-up operations and synchronizing files and directories between locations. dd : for copying files from a source to a destination block by block independently of the file systems or operating systems involved. Described scenarios: Disaster recovery Hardware migration Partition table backup Important folder backup Incremental backup Differential backup
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/appe-customer-portal-labs
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, you can highlight the text in a document and add comments. Follow the steps in the procedure to learn about submitting feedback on Red Hat documentation. Prerequisites Log in to the Red Hat Customer Portal. In the Red Hat Customer Portal, view the document in HTML format. Procedure Click the Feedback button to see existing reader comments. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. In the prompt menu that opens near the text you selected, click Add Feedback . A text box opens in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . You have created a documentation issue. To view the issue, click the issue tracker link in the feedback view.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/proc-providing-feedback-on-redhat-documentation_cryostat
|
Part VI. System Monitoring
|
Part VI. System Monitoring System administrators also monitor system performance. Red Hat Enterprise Linux contains tools to assist administrators with these tasks.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/system_monitoring
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/proc_providing-feedback-on-red-hat-documentation_rhel-installer
|
Red Hat Data Grid
|
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/red-hat-data-grid
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/kafka_configuration_tuning/making-open-source-more-inclusive
|
function::task_utime
|
function::task_utime Name function::task_utime - User time of the current task Synopsis Arguments None Description Returns the user time of the current task in cputime. Does not include any time used by other tasks in this process, nor does it include any time of the children of this task.
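As a usage sketch, the function can be called from any probe that runs in process context; the probed system call and the process name below are arbitrary choices for illustration. Save the script to a file and run it with stap. # Print the accumulated user time of a process each time it calls getpid()
probe syscall.getpid {
  if (execname() == "myapp")
    printf("%s[%d] utime (cputime units): %d\n", execname(), pid(), task_utime())
}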
|
[
"task_utime:long()"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-utime
|
Vulnerability reporting with Clair on Red Hat Quay
|
Vulnerability reporting with Clair on Red Hat Quay Red Hat Quay 3.13 Vulnerability reporting with Clair on Red Hat Quay Red Hat OpenShift Documentation Team
|
[
"updaters: config: rhel: ignore_unpatched: false",
"auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer'",
"# updaters: sets: - alpine - aws - osv #",
"# updaters: sets: - alpine #",
"# updaters: sets: - aws #",
"# updaters: sets: - debian #",
"# updaters: sets: - clair.cvss #",
"# updaters: sets: - oracle #",
"# updaters: sets: - photon #",
"# updaters: sets: - suse #",
"# updaters: sets: - ubuntu #",
"# updaters: sets: - osv #",
"# updaters: sets: - rhel - rhcc - clair.cvss - osv #",
"# updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #",
"# updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #",
"# updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #",
"# updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #",
"# updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #",
"# updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #",
"# updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #",
"# updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #",
"# updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #",
"# updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #",
"# matcher: disable_updaters: true #",
"--- FEATURE_FIPS = true ---",
"mkdir /home/<user-name>/quay-poc/postgres-clairv4",
"setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4",
"sudo podman run -d --name postgresql-clairv4 -e POSTGRESQL_USER=clairuser -e POSTGRESQL_PASSWORD=clairpass -e POSTGRESQL_DATABASE=clair -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5433:5432 -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-15",
"sudo podman exec -it postgresql-clairv4 /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\"\" | psql -d clair -U postgres'",
"CREATE EXTENSION",
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret",
"tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/",
"mkdir /etc/opt/clairv4/config/",
"cd /etc/opt/clairv4/config/",
"http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: \"MTU5YzA4Y2ZkNzJoMQ==\" iss: [\"quay\"] tracing and metrics trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\" metrics: name: \"prometheus\"",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3",
"sudo podman stop <quay_container_name>",
"sudo podman stop <clair_container_id>",
"sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> \\ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15",
"mkdir -p /host/data/clair-postgresql15-directory",
"setfacl -m u:26:-wx /host/data/clair-postgresql15-directory",
"sudo podman stop <clair_postgresql13_container_name>",
"sudo podman run -d --rm --name <postgresql15-clairv4> -e POSTGRESQL_USER=<clair_username> -e POSTGRESQL_PASSWORD=<clair_password> -e POSTGRESQL_DATABASE=<clair_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5433:5432 -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> registry.redhat.io/rhel8/postgresql-15",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:{productminv}",
"podman stop <clairv4_container_name>",
"podman pull quay.io/projectquay/clair:nightly-2024-02-03",
"podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z quay.io/projectquay/clair:nightly-2024-02-03",
"podman pull ubuntu:20.04",
"sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false",
"oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret",
"indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true",
"apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true",
"oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret",
"indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true",
"apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s",
"oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl",
"chmod u+x ./clairctl",
"oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"sudo podman cp clairv4:/usr/bin/clairctl ./clairctl",
"chmod u+x ./clairctl",
"mkdir /etc/clairv4/config/",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json",
"clair -conf ./path/to/config.yaml -mode indexer",
"clair -conf ./path/to/config.yaml -mode matcher",
"export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>",
"export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>",
"export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>",
"export NO_PROXY=<comma_separated_list_of_hosts_and_domains>",
"http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"",
"http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info",
"indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true",
"matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2",
"matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"",
"updaters: sets: - rhel config: rhel: ignore_unpatched: false",
"notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null",
"notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"",
"notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"",
"notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"",
"auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]",
"trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"",
"metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\""
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html-single/vulnerability_reporting_with_clair_on_red_hat_quay/OSV.dev
|
Chapter 5. Manually upgrading the kernel
|
Chapter 5. Manually upgrading the kernel The Red Hat Enterprise Linux kernel is custom-built by the Red Hat Enterprise Linux kernel team to ensure its integrity and compatibility with supported hardware. Before Red Hat releases a kernel, it must first pass a rigorous set of quality assurance tests. Red Hat Enterprise Linux kernels are packaged in the RPM format so that they are easy to upgrade and verify using the Yum or PackageKit package managers. PackageKit automatically queries the Red Hat Content Delivery Network servers and informs you of packages with available updates, including kernel packages. This chapter is therefore only useful for users who need to manually update a kernel package using the rpm command instead of yum . Warning Whenever possible, use either the Yum or PackageKit package manager to install a new kernel because they always install a new kernel instead of replacing the current one, which could potentially leave your system unable to boot. Warning Custom kernels are not supported by Red Hat. However, guidance can be obtained from the solution article . For more information on installing kernel packages with yum , see the relevant section in the System Administrator's Guide . For information on Red Hat Content Delivery Network, see the relevant section in the System Administrator's Guide . 5.1. Overview of kernel packages Red Hat Enterprise Linux contains the following kernel packages: kernel - Contains the kernel for single-core, multi-core, and multi-processor systems. kernel-debug - Contains a kernel with numerous debugging options enabled for kernel diagnosis, at the expense of reduced performance. kernel-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel package. kernel-debug-devel - Contains the development version of the kernel with numerous debugging options enabled for kernel diagnosis, at the expense of reduced performance. kernel-doc - Documentation files from the kernel source. Various portions of the Linux kernel and the device drivers shipped with it are documented in these files. Installation of this package provides a reference to the options that can be passed to Linux kernel modules at load time. By default, these files are placed in the /usr/share/doc/kernel-doc- kernel_version / directory. kernel-headers - Includes the C header files that specify the interface between the Linux kernel and user-space libraries and programs. The header files define structures and constants that are needed for building most standard programs. linux-firmware - Contains all of the firmware files that are required by various devices to operate. perf - This package contains the perf tool, which enables performance monitoring of the Linux kernel. kernel-abi-whitelists - Contains information pertaining to the Red Hat Enterprise Linux kernel ABI, including a lists of kernel symbols that are needed by external Linux kernel modules and a yum plug-in to aid enforcement. kernel-tools - Contains tools for manipulating the Linux kernel and supporting documentation. 5.2. Preparing to upgrade Before upgrading the kernel, it is recommended that you take some precautionary steps. First, ensure that working boot media exists for the system in case a problem occurs. 
If the boot loader is not configured properly to boot the new kernel, you can use this media to boot into Red Hat Enterprise Linux. USB media often comes in the form of flash devices sometimes called pen drives , thumb disks , or keys , or as an externally-connected hard disk device. Almost all media of this type is formatted as a VFAT file system. You can create bootable USB media on media formatted as ext2 , ext3 , ext4 , or VFAT . You can transfer a distribution image file or a minimal boot media image file to USB media. Make sure that sufficient free space is available on the device. Around 4 GB is required for a distribution DVD image, around 700 MB for a distribution CD image, or around 10 MB for a minimal boot media image. You must have a copy of the boot.iso file from a Red Hat Enterprise Linux installation DVD, or installation CD-ROM #1, and you need a USB storage device formatted with the VFAT file system and around 16 MB of free space. For more information on using USB storage devices, review the How to format a USB key and How to manually mount a USB flash drive in a non-graphical environment solution articles. The following procedure does not affect existing files on the USB storage device unless they have the same path names as the files that you copy onto it. To create USB boot media, perform the following commands as the root user: Install the syslinux package if it is not installed on your system. To do so, as root, run the yum install syslinux command. Install the SYSLINUX bootloader on the USB storage device: ... where sdX is the device name. Create mount points for boot.iso and the USB storage device: Mount boot.iso : Mount the USB storage device: Copy the ISOLINUX files from the boot.iso to the USB storage device: Use the isolinux.cfg file from boot.iso as the syslinux.cfg file for the USB device: Unmount boot.iso and the USB storage device: Reboot the machine with the boot media and verify that you are able to boot with it before continuing. Alternatively, on systems with a floppy drive, you can create a boot diskette by installing the mkbootdisk package and running the mkbootdisk command as root . See the mkbootdisk man page after installing the package for usage information. To determine which kernel packages are installed, execute the command yum list installed "kernel-*" at a shell prompt. The output comprises some or all of the following packages, depending on the system's architecture, and the version numbers might differ: From the output, determine which packages need to be downloaded for the kernel upgrade. For a single processor system, the only required package is the kernel package. See Section 5.1, "Overview of kernel packages" for descriptions of the different packages. 5.3. Downloading the upgraded kernel There are several ways to determine if an updated kernel is available for the system. Security Errata - See Security Advisories in Red Hat Customer Portal for information on security errata, including kernel upgrades that fix security issues. The Red Hat Content Delivery Network - For a system subscribed to the Red Hat Content Delivery Network, the yum package manager can download the latest kernel and upgrade the kernel on the system. The Dracut utility creates an initial RAM file system image if needed, and configures the boot loader to boot the new kernel. For more information on installing packages from the Red Hat Content Delivery Network, see the relevant section of the System Administrator's Guide . 
For more information on subscribing a system to the Red Hat Content Delivery Network, see the relevant section of the System Administrator's Guide . If yum was used to download and install the updated kernel from the Red Hat Network, follow the instructions in Section 5.5, "Verifying the initial RAM file system image" and Section 5.6, "Verifying the boot loader" only; do not change the kernel to boot by default. Red Hat Network automatically changes the default kernel to the latest version. To install the kernel manually, continue to Section 5.4, "Performing the upgrade" . 5.4. Performing the upgrade After retrieving all of the necessary packages, it is time to upgrade the existing kernel. Important It is strongly recommended that you keep the old kernel in case there are problems with the new kernel. At a shell prompt, change to the directory that contains the kernel RPM packages. Use the -i argument with the rpm command to keep the old kernel. Do not use the -U option, since it overwrites the currently installed kernel, which creates boot loader problems. For example: The next step is to verify that the initial RAM file system image has been created. See Section 5.5, "Verifying the initial RAM file system image" for details. 5.5. Verifying the initial RAM file system image The job of the initial RAM file system image is to preload the block device modules, such as for IDE, SCSI or RAID, so that the root file system, on which those modules normally reside, can then be accessed and mounted. On Red Hat Enterprise Linux 7 systems, whenever a new kernel is installed using either the Yum , PackageKit , or RPM package manager, the Dracut utility is always called by the installation scripts to create an initramfs (initial RAM file system image). If you make changes to the kernel attributes by modifying the /etc/sysctl.conf file or another sysctl configuration file, and if the changed settings are used early in the boot process, then rebuilding the Initial RAM File System Image by running the dracut -f command might be necessary. An example is if you have made changes related to networking and are booting from network-attached storage. On all architectures other than IBM eServer System i (see the section called "Verifying the initial RAM file system image and kernel on IBM eServer System i" ), you can create an initramfs by running the dracut command. However, you usually do not need to create an initramfs manually: this step is automatically performed if the kernel and its associated packages are installed or upgraded from RPM packages distributed by Red Hat. You can verify that an initramfs corresponding to your current kernel version exists and is specified correctly in the grub.cfg configuration file by following this procedure: Verifying the initial RAM file system image As root , list the contents in the /boot directory and find the kernel ( vmlinuz- kernel_version ) and initramfs- kernel_version with the latest (most recent) version number: Example 5.1. Ensuring that the kernel and initramfs versions match Example 5.1, "Ensuring that the kernel and initramfs versions match" shows that: we have three kernels installed (or, more correctly, three kernel files are present in the /boot directory), the latest kernel is vmlinuz-3.10.0-78.el7.x86_64 , and an initramfs file matching our kernel version, initramfs-3.10.0-78.el7.x86_64.img , also exists. Important In the /boot directory you might find several initramfs- kernel_version kdump.img files. 
These are special files created by the Kdump mechanism for kernel debugging purposes, are not used to boot the system, and can safely be ignored. For more information on kdump , see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide . If your initramfs- kernel_version file does not match the version of the latest kernel in the /boot directory, or, in certain other situations, you might need to generate an initramfs file with the Dracut utility. Simply invoking dracut as root without options causes it to generate an initramfs file in /boot for the latest kernel present in that directory: You must use the -f , --force option if you want dracut to overwrite an existing initramfs (for example, if your initramfs has become corrupt). Otherwise dracut refuses to overwrite the existing initramfs file: You can create an initramfs in the current directory by calling dracut initramfs_name kernel_version : If you need to specify specific kernel modules to be preloaded, add the names of those modules (minus any file name suffixes such as .ko ) inside the parentheses of the add_dracutmodules+=" module more_modules " directive of the /etc/dracut.conf configuration file. You can list the file contents of an initramfs image file created by dracut by using the lsinitrd initramfs_file command: See man dracut and man dracut.conf for more information on options and usage. Examine the /boot/grub2/grub.cfg configuration file to ensure that an initramfs- kernel_version .img file exists for the kernel version you are booting. For example: See Section 5.6, "Verifying the boot loader" for more information. Verifying the initial RAM file system image and kernel on IBM eServer System i On IBM eServer System i machines, the initial RAM file system and kernel files are combined into a single file, which is created with the addRamDisk command. This step is performed automatically if the kernel and its associated packages are installed or upgraded from the RPM packages distributed by Red Hat thus, it does not need to be executed manually. To verify that it was created, run the following command as root to make sure the /boot/vmlinitrd- kernel_version file already exists: The kernel_version needs to match the version of the kernel just installed. Reversing the changes made to the initial RAM file system image In some cases, for example, if you misconfigure the system and it no longer boots, you need to reverse the changes made to the Initial RAM File System Image by following this procedure: Reversing Changes Made to the Initial RAM File System Image Reboot the system choosing the rescue kernel in the GRUB menu. Change the incorrect setting that caused the initramfs to malfunction. Recreate the initramfs with the correct settings by running the following command as root: The above procedure might be useful if, for example, you incorrectly set the vm.nr_hugepages in the sysctl.conf file. Because the sysctl.conf file is included in initramfs , the new vm.nr_hugepages setting gets applied in initramfs and causes rebuilding of the initramfs . However, because the setting is incorrect, the new initramfs is broken and the newly built kernel does not boot, which necessitates correcting the setting using the above procedure. 
Listing the contents of the initial RAM file system image To list the files that are included in the initramfs , run the following command as root: To only list files in the /etc directory, use the following command: To output the contents of a specific file stored in the initramfs for the current kernel, use the -f option: For example, to output the contents of sysctl.conf , use the following command: To specify a kernel version, use the --kver option: For example, to list the information about kernel version 3.10.0-327.10.1.el7.x86_64, use the following command: 5.6. Verifying the boot loader You can install a kernel either with the yum command or with the rpm command. When you install a kernel using rpm , the kernel package creates an entry in the boot loader configuration file for that new kernel. Note that both commands configure the new kernel to boot as the default kernel only when you include the following setting in the /etc/sysconfig/kernel configuration file: The DEFAULTKERNEL option specifies the default kernel package type. The UPDATEDEFAULT option specifies whether the new kernel package makes the new kernels the default.
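As a consolidated illustration of this chapter's workflow, a manual upgrade and its verification might look like the following; the kernel version shown is only an example and should match the package you actually downloaded. # Install the new kernel alongside the old one; never use rpm -U here
rpm -ivh kernel-3.10.0-78.el7.x86_64.rpm

# Confirm that matching vmlinuz and initramfs files were created
ls /boot | grep 3.10.0-78.el7

# Confirm that the boot loader configuration references the new initramfs
grep initramfs /boot/grub2/grub.cfg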
|
[
"# syslinux /dev/sdX1",
"# mkdir /mnt/isoboot /mnt/diskboot",
"# mount -o loop boot.iso /mnt/isoboot",
"# mount /dev/sdX1 /mnt/diskboot",
"# cp /mnt/isoboot/isolinux/* /mnt/diskboot",
"# grep -v local /mnt/isoboot/isolinux/isolinux.cfg > /mnt/diskboot/syslinux.cfg",
"# umount /mnt/isoboot /mnt/diskboot",
"# yum list installed \"kernel-*\" kernel.x86_64 3.10.0-54.0.1.el7 @rhel7/7.0 kernel-devel.x86_64 3.10.0-54.0.1.el7 @rhel7 kernel-headers.x86_64 3.10.0-54.0.1.el7 @rhel7/7.0",
"# rpm -ivh kernel-kernel_version.arch.rpm",
"# ls /boot config-3.10.0-67.el7.x86_64 config-3.10.0-78.el7.x86_64 efi grub grub2 initramfs-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c.img initramfs-3.10.0-67.el7.x86_64.img initramfs-3.10.0-67.el7.x86_64kdump.img initramfs-3.10.0-78.el7.x86_64.img initramfs-3.10.0-78.el7.x86_64kdump.img initrd-plymouth.img symvers-3.10.0-67.el7.x86_64.gz symvers-3.10.0-78.el7.x86_64.gz System.map-3.10.0-67.el7.x86_64 System.map-3.10.0-78.el7.x86_64 vmlinuz-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c vmlinuz-3.10.0-67.el7.x86_64 vmlinuz-3.10.0-78.el7.x86_64",
"# dracut",
"# dracut Does not override existing initramfs (/boot/initramfs-3.10.0-78.el7.x86_64.img) without --force",
"# dracut \"initramfs-USD(uname -r).img\" USD(uname -r)",
"# lsinitrd /boot/initramfs-3.10.0-78.el7.x86_64.img Image: /boot/initramfs-3.10.0-78.el7.x86_64.img: 11M ======================================================================== dracut-033-68.el7 ======================================================================== drwxr-xr-x 12 root root 0 Feb 5 06:35 . drwxr-xr-x 2 root root 0 Feb 5 06:35 proc lrwxrwxrwx 1 root root 24 Feb 5 06:35 init -> /usr/lib/systemd/systemd drwxr-xr-x 10 root root 0 Feb 5 06:35 etc drwxr-xr-x 2 root root 0 Feb 5 06:35 usr/lib/modprobe.d [output truncated]",
"# grep initramfs /boot/grub2/grub.cfg initrd16 /initramfs-3.10.0-123.el7.x86_64.img initrd16 /initramfs-0-rescue-6d547dbfd01c46f6a4c1baa8c4743f57.img",
"# ls -l /boot/",
"# dracut --kver kernel_version --force",
"# lsinitrd",
"# lsinitrd | grep etc/",
"# lsinitrd -f filename",
"# lsinitrd -f /etc/sysctl.conf",
"# lsinitrd --kver kernel_version -f /etc/sysctl.conf",
"# lsinitrd --kver 3.10.0-327.10.1.el7.x86_64 -f /etc/sysctl.conf",
"DEFAULTKERNEL=kernel UPDATEDEFAULT=yes"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/kernel_administration_guide/ch-manually_upgrading_the_kernel
|
B.78.2. RHBA-2011:0012 - qemu-kvm bug fix update
|
B.78.2. RHBA-2011:0012 - qemu-kvm bug fix update Updated qemu-kvm packages that fix various bugs are now available for Red Hat Enterprise Linux 6. KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on AMD64 and Intel 64 systems. qemu-kvm is the user-space component for running virtual machines using KVM. Bug Fixes BZ# 648821 When running a Windows Server 2008 virtual machine with a virtio network interface controller (NIC), unplugging the NIC could cause qemu-kvm to terminate unexpectedly with a segmentation fault. With this update, the underlying source code has been modified to address this issue, and unplugging such a NIC while the virtual machine is active no longer causes qemu-kvm to crash. BZ# 653329 Previously, qemu-kvm did not allow a user to select a resolution higher than 1920x1080, which may have been rather limiting. This update increases the maximum supported resolution to 2560x1600. BZ# 653337 Due to an error in the Russian keyboard layout, pressing the "/" and "|" keys with the "ru" layout enabled produced wrong characters. With this update, the relevant lines in the ru.orig file have been corrected, and pressing these keys now produces the expected results. BZ# 653341 Under certain circumstances, QEMU could stop responding during the installation of an operating system in a virtual machine when the QXL display device was in use. This error no longer occurs, and qemu-kvm now works as expected. BZ# 653343 When running a virtual machine with 4 or more gigabytes of virtual memory, an attempt to hot plug a network interface controller (NIC) failed with the following error message: Device '[device_name]' could not be initialized This update resolves this issue, and hot-plugging a NIC in a virtual machine with 4 or more gigabytes of virtual memory no longer fails. BZ# 662058 Previously, converting a disk image by using the "qemu-img convert" command could be significantly slow. With this update, various patches have been applied to improve the performance of this command. All users of qemu-kvm are advised to upgrade to these updated packages, which resolve these issues.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2011-0012
|
Managing hybrid and multicloud resources
|
Managing hybrid and multicloud resources Red Hat OpenShift Data Foundation 4.17 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint . Prerequisites A running OpenShift Data Foundation Platform. 2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You have the relevant endpoint, access key, and secret access key in order to connect to your applications. For example: If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: 2.3. 
Support of Multicloud Object Gateway data bucket APIs The following table lists the Multicloud Object Gateway (MCG) data bucket APIs and their support levels. Data buckets Support List buckets Supported Delete bucket Supported Replication configuration is part of MCG bucket class configuration Create bucket Supported A different set of canned ACLs Post bucket Not supported Put bucket Partially supported Replication configuration is part of MCG bucket class configuration Bucket lifecycle Partially supported Object expiration only Policy (Buckets, Objects) Partially supported Bucket policies are supported Bucket Website Supported Bucket ACLs (Get, Put) Supported A different set of canned ACLs Bucket Location Partially supported Returns a default value only Bucket Notification Not supported Bucket Object Versions Supported Get Bucket Info (HEAD) Supported Bucket Request Payment Partially supported Returns the bucket owner Put Object Supported Delete Object Supported Get Object Supported Object ACLs (Get, Put) Supported Get Object Info (HEAD) Supported POST Object Supported Copy Object Supported Multipart Uploads Supported Object Tagging Supported Storage Class Not supported Note No support for cors, metrics, inventory, analytics, logging, notifications, accelerate, replication, request payment, locks verbs Chapter 3. Adding storage resources for hybrid or Multicloud 3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 3.3, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab to view all the backing stores. 3.2. Overriding the default backing store You can use the manualDefaultBackingStore flag to override the default NooBaa backing store and remove it if you do not want to use the default backing store configuration. This provides flexibility to customize your backing store configuration and tailor it to your specific needs. By leveraging this feature, you can further optimize your system and enhance its performance. Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed.
Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Check if noobaa-default-backing-store is present: Patch the NooBaa CR to enable manualDefaultBackingStore : Important Use the Multicloud Object Gateway CLI to create a new backing store and update accounts. Create a new default backing store to override the default backing store. For example: Replace NEW-DEFAULT-BACKING-STORE with the name you want for your new default backing store. Update the admin account to use the new default backing store as its default resource: Replace NEW-DEFAULT-BACKING-STORE with the name of the backing store from the step. Updating the default resource for admin accounts ensures that the new configuration is used throughout your system. Configure the default-bucketclass to use the new default backingstore: Optional: Delete the noobaa-default-backing-store. Delete all instances of and buckets associated with noobaa-default-backing-store and update the accounts using it as resource. Delete the noobaa-default-backing-store: You must enable the manualDefaultBackingStore flag before proceeding. Additionally, it is crucial to update all accounts that use the default resource and delete all instances of and buckets associated with the default backing store to ensure a smooth transition. 3.3. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 3.3.1, "Creating an AWS-backed backingstore" For creating an AWS-STS-backed backingstore, see Section 3.3.2, "Creating an AWS-STS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 3.3.3, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 3.3.4, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 3.3.5, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 3.3.6, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 3.4, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 3.3.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
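The create command itself is not reproduced above. As a minimal sketch, assuming the standard MCG CLI syntax and the openshift-storage namespace, the call would look like this:

noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage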
The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 3.3.2. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 3.3.2.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. The following example shows the details that are required to create the role: where 123456789123 Is the AWS account ID mybucket Is the bucket name (using public bucket configuration) us-east-2 Is the AWS region openshift-storage Is the namespace name Sample script 3.3.2.2. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 3.3.2.3. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command: where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN which will assume role region The AWS bucket region target-bucket The target bucket name on the cloud 3.3.3. 
Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> An existing IBM COS bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to MCG about the endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.4. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step.
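As a hedged sketch of the Azure manifest described above (the spec layout follows the NooBaa BackingStore CRD as commonly published; verify the exact field names against the CRD installed in your cluster):

cat <<EOF | oc apply -f -
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: <backingstore_name>
  namespace: openshift-storage
spec:
  type: azure-blob
  azureBlob:
    targetBlobContainer: <blob-container-name>
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
EOF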
3.3.5. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.6. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 3.4. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them.
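A sketch of these steps, assuming the usual MCG CLI s3-compatible subcommand and using a placeholder for the RGW user secret name:

# Inspect the RGW user secret and decode its keys
oc get secret <RGW user secret name> -n openshift-storage -o yaml
echo "<base64-encoded AccessKey>" | base64 -d
echo "<base64-encoded SecretKey>" | base64 -d
# Create the S3 compatible backingstore from within the openshift-storage namespace
noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage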
Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab and search the new Bucket Class. 3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. 
A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save . Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list buckets call that uses this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PutObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name.
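A sketch of such a secret, assuming the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY data keys that the NooBaa operator conventionally reads:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
EOF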
You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. 
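For illustration, a single-policy bucket class might look like the following sketch (the namespacePolicy layout is taken from the NooBaa BucketClass CRD as commonly published; confirm it against the version installed in your cluster):

cat <<EOF | oc apply -f -
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>
EOF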
The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStore names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources> A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications .
Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage -> Object Storage -> Namespace Store tab. Click Create namespace store to create namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created the desired number of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose the Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource.
If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that your namespace bucket is present in the list and is in Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, an RWX volume such as Ceph FileSystem (CephFS) or create new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma-separated list of bucket names to which the user is allowed to have access and management rights.
default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account should be allowed full permission or not. Supported values are true or false . Default value is false . new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. 
Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. 
Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files takes place and the SELinux labels now match those of the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name> Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. Chapter 5. Securing Multicloud Object Gateway 5.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security. 5.1.1. Resetting the noobaa account password Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To reset the noobaa account password, run the following command: Example: Example output: Important To access the admin account credentials run the noobaa status command from the terminal: 5.1.2. Setting Multicloud Object Gateway account credentials using CLI command You can update and verify the Multicloud Object Gateway (MCG) account credentials manually by using the MCG CLI command. Prerequisites Ensure that the following prerequisites are met: A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To update the MCG account credentials, run the following command: Example: Example output: Credential complexity requirements: Access key The account access key must be 20 characters in length and it must contain only alphanumeric characters.
Secret key The secret key must be 40 characters in length and it must contain alphanumeric characters and "+", "/". For example: To verify the credentials, run the following command: Note You cannot have a duplicate access-key. Each user must have a unique access-key and secret-key . 5.1.3. Regenerating the S3 credentials for the accounts Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Get the account name. For listing the accounts, run the following command: Example output: Alternatively, run the oc get noobaaaccount command from the terminal: Example output: To regenerate the noobaa account S3 credentials, run the following command: Once you run the noobaa account regenerate command it will prompt a warning that says "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.1.4. Regenerating the S3 credentials for the OBC Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To get the OBC name, run the following command: Example output: Alternatively, run the oc get obc command from the terminal: Example output: To regenerate the noobaa OBC S3 credentials, run the following command: Once you run the noobaa obc regenerate command it will prompt a warning that says "This will invalidate all connections between the S3 clients and noobaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.2. Enabling secured mode deployment for Multicloud Object Gateway You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secure mode deployment. This helps to control the IP addresses that can access the MCG services. Note You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster custom resource definition (CRD) while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP . For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface . For information about disabling MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . Prerequisites A running OpenShift Data Foundation cluster. In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services. 
Procedure Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation. noobaa The NooBaa CR type that controls the NooBaa system deployment. noobaa The name of the NooBaa CR. For example: loadBalancerSourceSubnets A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services. In this example, all the IP addresses that are in the subnet 10.0.0.0/16 or 192.168.10.0/32 will be able to access MCG S3 and security token service (STS) while the other IP addresses are not allowed access. Verification steps To verify if the specified IP addresses are set, in the OpenShift Web Console, run the following command and check if the output matches the IP addresses provided to MCG: Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 3, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you download the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint.
Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets. Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementary replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1.
Replicating a bucket to another bucket using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of an object bucket claim (OBC). You must define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of an object bucket claim (OBC) or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of a bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes. Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. You can pass many backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of a bucket class or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. 
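For orientation before the field descriptions that follow: whether it is passed to the CLI as /path/to/json-file.json or embedded in the YAML as a JSON-compliant string, the replication policy is essentially a list of rules. A minimal sketch using the example values from this section (rule-1, first.bucket, and an empty prefix) might look like the following; treat it as illustrative, and check the examples shipped with your release because the exact wrapper (a bare list versus an object with a "rules" key) can vary between versions:

[
  { "rule_id": "rule-1", "destination_bucket": "first.bucket", "prefix": "" }
]

In the log-based replication sections later in this chapter, the prefix appears nested under a "filter" key instead.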
Each Object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket. <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. You can pass many backingstores. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . 8.3. Enabling log based bucket replication When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data. Important This feature requires setting up bucket logs on AWS or Azure. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket. Note This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilize log-based replication. 8.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Service environment You can optimize replication by using the event logs of the Amazon Web Service (AWS) cloud environment. You can enable log based bucket replication for new namespace buckets by using the web console during the creation of the namespace buckets. Prerequisites Ensure that object logging is enabled in AWS. For more information, see the "Using the S3 console" section in Enabling Amazon S3 server access logging . Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Object Bucket Claims . Click Create ObjectBucketClaim . Enter the name of ObjectBucketName and select StorageClass and BucketClass. Select the Enable replication check box to enable replication. In the Replication policy section, select the Optimize replication using event logs checkbox. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. 8.3.2. Enabling log based bucket replication for existing namespace buckets using YAML You can enable log based bucket replication for the existing buckets that are created using the command line interface or by applying a YAML, and not the buckets that are created using AWS S3 commands. Procedure Edit the YAML of the bucket's OBC to enable log based bucket replication. Add the following under spec : Note It is also possible to add this to the YAML of an OBC before it is created. rule_id Specify an ID of your choice for identifying the rule. destination_bucket Specify the name of the target MCG bucket that the objects are copied to. (optional) {"filter": {"prefix": <>}} Specify a prefix string that you can set to filter the objects that are replicated. log_replication_info Specify an object that contains data related to log-based replication optimization. 
{"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs. 8.3.3. Enabling log based bucket replication in Microsoft Azure Prerequisites Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal: Ensure that you have created a new application and noted down the name, application (client) ID, and directory (tenant) ID. For information, see Register an application . Ensure that a new client secret is created and the application secret is noted down. Ensure that a new Log Analytics workspace is created and its name and workspace ID are noted down. For information, see Create a Log Analytics workspace . Ensure that the Reader role is assigned under Access control and members are selected and the name of the application that you registered in the previous step is provided. For more information, see Assign Azure roles using the Azure portal . Ensure that a new storage account is created and the Access keys are noted down. In the Monitoring section of the storage account that you created, select a blob and, in the Diagnostic settings screen, select only StorageWrite and StorageDelete , and in the destination details add the Log Analytics workspace that you created earlier. For more information, see Diagnostic settings in Azure Monitor . Ensure that two new containers for object source and object destination are created. Administrator access to OpenShift Web Console. Procedure Create a secret with credentials to be used by the namespacestores . Create a NamespaceStore backed by a container created in Azure. For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface . Create a new Namespace-Bucketclass and OBC that utilizes it. Check the object bucket name by looking in the YAML of the target OBC, or by listing all S3 buckets, for example, s3 ls . Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec : sync_deletion Specify a boolean value, true or false . destination_bucket Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC's YAML. Verification steps Write objects to the source bucket. Wait until MCG replicates them. Delete the objects from the source bucket. Verify the objects were removed from the target bucket. 8.3.4. Enabling log-based bucket replication deletion Prerequisites Administrator access to OpenShift Web Console. AWS Server Access Logging configured for the desired bucket. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Object Bucket Claims . Click Create new Object bucket claim . (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. 
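If you prefer to apply the sync deletion behavior through YAML rather than the web console, the rule shape follows the field descriptions given above for the Azure template. The following is a rough, illustrative sketch only; the exact key under spec follows the replication policy format used elsewhere in this chapter, and the destination must be the generated object bucket name, not the claim name:

[
  { "rule_id": "azure-rule-1", "sync_deletion": true, "destination_bucket": "<object-bucket-name>" }
]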
8.4. Bucket logging for Multicloud Object Gateway Bucket logging helps you to record the S3 operations that are performed against the Multicloud Object Gateway (MCG) bucket for compliance, auditing, and optimization purposes. Bucket logging supports the following two options: Best-effort - Bucket logging is recorded using UDP on a best-effort basis Guaranteed - Bucket logging with this option creates a PVC attached to the MCG pods and saves the logs to this PVC on a Guaranteed basis, and then from the PVC to the log buckets. Using this option, logging takes place twice for every S3 operation as follows: At the start of processing the request At the end with the result of the S3 operation 8.4.1. Enabling bucket logging for Multicloud Object Gateway using the Best-effort option Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to MCG. For information, see Accessing the Multicloud Object Gateway with your applications . Procedure Create a data bucket where you can upload the objects. Create a log bucket where you want to store the logs for bucket operations by using the following command: Configure bucket logging on the data bucket with the log bucket in one of the following ways: Using the NooBaa API Using the S3 API Create a file called setlogging.json in the following format: Run the following command: Verify if the bucket logging is set for the data bucket in one of the following ways: Using the NooBaa API Using the S3 API The S3 operations can take up to 24 hours to get recorded in the logs bucket. The following example shows the recorded logs and how to download them: Example (Optional) To disable bucket logging, use the following command: 8.4.2. Enabling bucket logging using the Guaranteed option Procedure Enable Guaranteed bucket logging using the NooBaa CR in one of the following ways: Using the default CephFS storage class, update the NooBaa CR spec: Using the RWX PVC that you created: Note Make sure that the PVC supports RWX Chapter 9. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 9.1, "Dynamic Object Bucket Claim" Section 9.2, "Creating an Object Bucket Claim using the command line interface" Section 9.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 9.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoint certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. 
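The procedure below adds an OBC definition along these lines; a minimal sketch based on the ObjectBucketClaim examples elsewhere in this document, where <obc-name> and <obc-bucket-name> are the same placeholders used in the steps:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
spec:
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io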
Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC, add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that they are compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write, or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 9.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. 
Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 9.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 9.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 9.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Buckets . Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page. 9.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. 
From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. 
Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway Multicloud Object Gateway (MCG) lifecycle provides a way to reduce storage costs due to accumulated data objects. Deletion of expired objects is a simplified way that enables handling of unused data. Data expiration is a part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of the lifecycle expiration is one day. For more information, see Expiring objects . The AWS S3 API is used to configure the bucket lifecycle in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs . There are a few limitations with the expiration rule API for MCG in comparison with AWS: ExpiredObjectDeleteMarker is accepted but it is not processed. There is no option to define specific non-current version expiration conditions. Chapter 12. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 12.1. Automatic scaling of Multicloud Object Gateway endpoints The number of Multicloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to 10 . 12.2. Increasing CPU and memory for PV pool resources The MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, you can configure the required values for CPU and memory in the OpenShift Web Console. 
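The procedure that follows edits the backing store YAML in the web console. For orientation, the stanza being updated looks roughly like this sketch, based on the pv-pool BackingStore examples elsewhere in this document; the CPU and memory values are placeholders to adjust for your workload:

spec:
  pvPool:
    resources:
      requests:
        cpu: 2
        memory: 4Gi
      limits:
        cpu: 2
        memory: 4Gi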
Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Backing Store . Select the relevant backing store and click YAML. Scroll down until you find spec: and update pvPool with CPU and memory. Add a new property of limits and then add cpu and memory. Example reference: Click Save . Verification steps To verify, you can check the resource values of the PV pool pods. Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore . Chapter 14. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in forms such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret, and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the kubernetes secret: <secret_name> The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the kubernetes secret: <secret_name> The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert . 14.1. Accessing External RGW server in OpenShift Data Foundation Accessing External RGW server using Object Bucket Claims The S3 credentials, such as the AccessKey or Secret Key, are stored in the secret generated by the Object Bucket Claim (OBC) creation, and you can fetch them by using the following commands: 
A name of your choice for the role session After the configuration role is ready, assign it to the appropriate user (fill with the data described in the step) - Note Adding --no-verify-ssl might be necessary depending on your cluster's configuration. The resulting output contains the access key ID, secret access key, and session token that can be used for executing actions while assuming the other user's role. You can use the credentials generated after the assume role steps as shown in the following example:
|
[
"oc describe noobaa -n openshift-storage",
"Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443",
"noobaa status -n openshift-storage",
"INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# 
#-----------------# No OBC's found.",
"AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls",
"oc get backingstore NAME TYPE PHASE AGE noobaa-default-backing-store pv-pool Creating 102s",
"oc patch noobaa/noobaa --type json --patch='[{\"op\":\"add\",\"path\":\"/spec/manualDefaultBackingStore\",\"value\":true}]'",
"noobaa backingstore create pv-pool _NEW-DEFAULT-BACKING-STORE_ --num-volumes 1 --pv-size-gb 16",
"noobaa account update [email protected] --new_default_resource=_NEW-DEFAULT-BACKING-STORE_",
"oc patch Bucketclass noobaa-default-bucket-class -n openshift-storage --type=json --patch='[{\"op\": \"replace\", \"path\": \"/spec/placementPolicy/tiers/0/backingStores/0\", \"value\": \"NEW-DEFAULT-BACKING-STORE\"}]'",
"oc delete backingstore noobaa-default-backing-store -n openshift-storage | oc patch -n openshift-storage backingstore/noobaa-default-backing-store --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]'",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-core\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }",
"#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. MCG variables SERVICE_ACCOUNT_NAME_1=\"noobaa\" # The service account name of deployment operator SERVICE_ACCOUNT_NAME_2=\"noobaa-endpoint\" # The service account name of deployment endpoint SERVICE_ACCOUNT_NAME_3=\"noobaa-core\" # The service account name of statefulset core AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_3}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa account create <noobaa-account-name> [flags]",
"noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore",
"NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>",
"noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s",
"oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001",
"oc get ns <application_namespace> -o yaml | grep scc",
"oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000",
"oc project <application_namespace>",
"oc project testnamespace",
"oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s",
"oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s",
"oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}",
"oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]",
"oc exec -it <pod_name> -- df <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"oc get pv | grep <pv_name>",
"oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s",
"oc get pv <pv_name> -o yaml",
"oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound",
"cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF",
"oc create -f <YAML_file>",
"oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created",
"oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s",
"oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".",
"noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'",
"noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'",
"oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace",
"noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'",
"noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'",
"oc exec -it <pod_name> -- mkdir <mount_path> /nsfs",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs",
"noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'",
"noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'",
"oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"noobaa bucket delete <bucket_name>",
"noobaa bucket delete legacy-bucket",
"noobaa account delete <user_account>",
"noobaa account delete leguser",
"noobaa namespacestore delete <nsfs_namespacestore>",
"noobaa namespacestore delete legacy-namespace",
"oc delete pv <cephfs_pv_name>",
"oc delete pvc <cephfs_pvc_name>",
"oc delete pv cephfs-pv-legacy-openshift-storage",
"oc delete pvc cephfs-pvc-legacy",
"oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"oc edit ns <appplication_namespace>",
"oc edit ns testnamespace",
"oc get ns <application_namespace> -o yaml | grep sa.scc.mcs",
"oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF",
"oc create -f scc.yaml",
"oc create serviceaccount <service_account_name>",
"oc create serviceaccount testnamespacesa",
"oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>",
"oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa",
"oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'",
"oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'",
"oc edit dc <pod_name> -n <application_namespace>",
"spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>",
"oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace",
"spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0",
"oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext",
"oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0",
"noobaa account passwd <noobaa_account_name> [options]",
"noobaa account passwd FATA[0000] ❌ Missing expected arguments: <noobaa_account_name> Options: --new-password='': New Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in t he shell history --old-password='': Old Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history Usage: noobaa account passwd <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa account passwd [email protected]",
"Enter old-password: [got 24 characters] Enter new-password: [got 7 characters] Enter retype-new-password: [got 7 characters] INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✅ Exists: NooBaa \"noobaa\" INFO[0017] ✅ Exists: Service \"noobaa-mgmt\" INFO[0017] ✅ Exists: Secret \"noobaa-operator\" INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✈\\ufe0f RPC: account.reset_password() Request: {Email:[email protected] VerificationPassword: * Password: *} WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0 INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms INFO[0020] ✅ Updated: \"noobaa-admin\" INFO[0020] ✅ Successfully reset the password for the account \"[email protected]\"",
"-------------------- - Mgmt Credentials - -------------------- email : [email protected] password : ***",
"noobaa account credentials <noobaa-account-name> [options]",
"noobaa account credentials [email protected]",
"noobaa account credentials [email protected] Enter access-key: [got 20 characters] Enter secret-key: [got 40 characters] INFO[0026] ❌ Not Found: NooBaaAccount \"[email protected]\" INFO[0026] ✅ Exists: NooBaa \"noobaa\" INFO[0026] ✅ Exists: Service \"noobaa-mgmt\" INFO[0026] ✅ Exists: Secret \"noobaa-operator\" INFO[0026] ✅ Exists: Secret \"noobaa-admin\" INFO[0026] ✈\\ufe0f RPC: account.update_account_keys() Request: {Email:[email protected] AccessKeys:{AccessKey: * SecretKey: }} WARN[0026] RPC: GetConnection creating connection to wss://localhost:33495/rpc/ 0xc000cd9980 INFO[0026] RPC: Connecting websocket (0xc000cd9980) &{RPC:0xc0001655e0 Address:wss://localhost:33495/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0026] RPC: Connected websocket (0xc000cd9980) &{RPC:0xc0001655e0 Address:wss://localhost:33495/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0026] ✅ RPC: account.update_account_keys() Response OK: took 42.7ms INFO[0026] ✈\\ufe0f RPC: account.read_account() Request: {Email:[email protected]} INFO[0026] ✅ RPC: account.read_account() Response OK: took 2.0ms INFO[0026] ✅ Updated: \"noobaa-admin\" INFO[0026] ✅ Successfully updated s3 credentials for the account \"[email protected]\" INFO[0026] ✅ Exists: Secret \"noobaa-admin\" Connection info: AWS_ACCESS_KEY_ID : AWS_SECRET_ACCESS_KEY : *",
"noobaa account credentials my-account --access-key=ABCDEF1234567890ABCD --secret-key=ABCDE12345+FGHIJ67890/KLMNOPQRSTUV123456",
"noobaa account status <noobaa-account-name> --show-secrets",
"noobaa account list",
"NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE account-test [*] noobaa-default-backing-store Ready 14m17s test2 [first.bucket] noobaa-default-backing-store Ready 3m12s",
"oc get noobaaaccount",
"NAME PHASE AGE account-test Ready 15m test2 Ready 3m59s",
"noobaa account regenerate <noobaa_account_name> [options]",
"noobaa account regenerate FATA[0000] ❌ Missing expected arguments: <noobaa-account-name> Usage: noobaa account regenerate <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa account regenerate account-test",
"INFO[0000] You are about to regenerate an account's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n",
"INFO[0015] ✅ Exists: Secret \"noobaa-account-account-test\" Connection info: AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : ***",
"noobaa obc list",
"NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE default obc-test obc-test-35800e50-8978-461f-b7e0-7793080e26ba default.noobaa.io noobaa-default-bucket-class Bound",
"oc get obc",
"NAME STORAGE-CLASS PHASE AGE obc-test default.noobaa.io Bound 38s",
"noobaa obc regenerate <bucket_claim_name> [options]",
"noobaa obc regenerate FATA[0000] ❌ Missing expected arguments: <bucket-claim-name> Usage: noobaa obc regenerate <bucket-claim-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa obc regenerate obc-test",
"INFO[0000] You are about to regenerate an OBC's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n",
"INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms ObjectBucketClaim info: Phase : Bound ObjectBucketClaim : kubectl get -n default objectbucketclaim obc-test ConfigMap : kubectl get -n default configmap obc-test Secret : kubectl get -n default secret obc-test ObjectBucket : kubectl get objectbucket obc-default-obc-test StorageClass : kubectl get storageclass default.noobaa.io BucketClass : kubectl get -n default bucketclass noobaa-default-bucket-class Connection info: BUCKET_HOST : s3.default.svc BUCKET_NAME : obc-test-35800e50-8978-461f-b7e0-7793080e26ba BUCKET_PORT : 443 AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : *** Shell commands: AWS S3 Alias : alias s3='AWS_ACCESS_KEY_ID=*** AWS_SECRET_ACCESS_KEY =*** aws s3 --no-verify-ssl --endpoint-url ***' Bucket status: Name : obc-test-35800e50-8978-461f-b7e0-7793080e26ba Type : REGULAR Mode : OPTIMAL ResiliencyStatus : OPTIMAL QuotaStatus : QUOTA_NOT_SET Num Objects : 0 Data Size : 0.000 B Data Size Reduced : 0.000 B Data Space Avail : 13.261 GB Num Objects Avail : 9007199254740991",
"oc edit noobaa -n openshift-storage noobaa",
"spec: loadBalancerSourceSubnets: s3: [\"10.0.0.0/16\", \"192.168.10.0/32\"] sts: - \"10.0.0.0/16\" - \"192.168.10.0/32\"",
"oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]",
"noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json",
"[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]",
"noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: {\"rules\": [{ \"rule_id\": \"\", \"destination_bucket\": \"\", \"filter\": {\"prefix\": \"\"}}]}",
"noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json",
"[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]",
"noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]",
"replicationPolicy: '{\"rules\":[{\"rule_id\":\"<RULE ID>\", \"destination_bucket\":\"<DEST>\", \"filter\": {\"prefix\": \"<PREFIX>\"}}], \"log_replication_info\": {\"logs_location\": {\"logs_bucket\": \"<LOGS_BUCKET>\"}}}'",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: TenantID: <AZURE TENANT ID ENCODED IN BASE64> ApplicationID: <AZURE APPLICATIOM ID ENCODED IN BASE64> ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64> LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64> AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"replicationPolicy:'{\"rules\":[ {\"rule_id\":\"ID goes here\", \"sync_deletions\": \"<true or false>\"\", \"destination_bucket\":object bucket name\"} ], \"log_replication_info\":{\"endpoint_type\":\"AZURE\"}}'",
"nb bucket create data.bucket",
"nb bucket create log.bucket",
"nb api bucket_api put_bucket_logging '{ \"name\": \"data.bucket\", \"log_bucket\": \"log.bucket\", \"log_prefix\": \"data-bucket-logs\" }'",
"alias s3api_alias='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3api'",
"{ \"LoggingEnabled\": { \"TargetBucket\": \"<log-bucket-name>\", \"TargetPrefix\": \"<prefix/empty-string>\" } }",
"s3api_alias put-bucket-logging --endpoint <ep> --bucket <source-bucket> --bucket-logging-status file://setlogging.json --no-verify-ssl",
"nb api bucket_api get_bucket_logging '{ \"name\": \"data.bucket\" }'",
"s3api_alias get-bucket-logging --no-verify-ssl --endpoint <ep> --bucket <source-bucket>",
"s3_alias cp s3://logs.bucket/data-bucket-logs/logs.bucket.bucket_data-bucket-logs_1719230150.log - | tail -n 2 Jun 24 14:00:02 10-XXX-X-XXX.sts.openshift-storage.svc.cluster.local {\"noobaa_bucket_logging\":\"true\",\"op\":\"GET\",\"bucket_owner\":\"[email protected]\",\"source_bucket\":\"data.bucket\",\"object_key\":\"/data.bucket?list-type=2&prefix=data-bucket-logs&delimiter=%2F&encoding-type=url\",\"log_bucket\":\"logs.bucket\",\"remote_ip\":\"100.XX.X.X\",\"request_uri\":\"/data.bucket?list-type=2&prefix=data-bucket-logs&delimiter=%2F&encoding-type=url\",\"request_id\":\"luv2XXXX-ctyg2k-12gs\"} Jun 24 14:00:06 10-XXX-X-XXX.s3.openshift-storage.svc.cluster.local {\"noobaa_bucket_logging\":\"true\",\"op\":\"PUT\",\"bucket_owner\":\"[email protected]\",\"source_bucket\":\"data.bucket\",\"object_key\":\"/data.bucket/B69EC83F-0177-44D8-A8D1-4A10C5A5AB0F.file\",\"log_bucket\":\"logs.bucket\",\"remote_ip\":\"100.XX.X.X\",\"request_uri\":\"/data.bucket/B69EC83F-0177-44D8-A8D1-4A10C5A5AB0F.file\",\"request_id\":\"luv2XXXX-9syea5-x5z\"}",
"nb api bucket_api delete_bucket_logging '{ \"name\": \"data.bucket\" }'",
"bucketLogging: { loggingType: guaranteed }",
"bucketLogging: { loggingType: guaranteed bucketLoggingPVC: <pvc-name> }",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'",
"spec: pvPool: resources: limits: cpu: 1000m memory: 4000Mi requests: cpu: 800m memory: 800Mi storage: 50Gi",
"oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d",
"oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d",
"oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode",
"oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_HOST}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_PORT}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_NAME}'",
"oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.AccessKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.SecretKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.Endpoint}' | base64 --decode",
"'{\"role_name\": \"AllowTwoAssumers\", \"assume_role_policy\": {\"version\": \"2012-10-17\", \"statement\": [ {\"action\": [\"sts:AssumeRole\"], \"effect\": \"allow\", \"principal\": [\"[email protected]\", \"[email protected]\"]}]}}'",
"mcg sts assign-role --email <assumed user's username> --role_config '{\"role_name\": \"AllowTwoAssumers\", \"assume_role_policy\": {\"version\": \"2012-10-17\", \"statement\": [ {\"action\": [\"sts:AssumeRole\"], \"effect\": \"allow\", \"principal\": [\"[email protected]\", \"[email protected]\"]}]}}'",
"oc -n openshift-storage get route",
"AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> aws --endpoint-url <mcg-sts-endpoint> sts assume-role --role-arn arn:aws:sts::<assumed-user-access-key-id>:role/<role-name> --role-session-name <role-session-name>",
"AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> AWS_SESSION_TOKEN=<session token> aws --endpoint-url <mcg-s3-endpoint> s3 ls"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/managing_hybrid_and_multicloud_resources/replicating-a-bucket-to-another-bucket_rhodf
|
8.65. hwdata
|
8.65. hwdata 8.65.1. RHBA-2013:1612 - hwdata bug fix and enhancement update An updated hwdata package that fixes one bug and adds various enhancements is now available for Red Hat Enterprise Linux 6. The hwdata package contains tools for accessing and displaying hardware identification and configuration data. Bug Fix BZ# 989142 Previously, certain information about the Red Hat Virtio Small Computer System Interface (SCSI) device was missing from the pci.ids database. Consequently, when using the lspci utility, the device name was not shown correctly and the numeric device ID was shown instead. With this update, the pci.ids database has been modified to provide correct information as expected. Enhancements BZ# 982659 The PCI ID numbers have been updated for the Beta and the Final compose lists. BZ# 739838 With this update, the pci.ids database has been updated with information about AMD FirePro graphic cards. BZ# 948121 With this update, the pci.ids database has been updated with information about the Cisco VIC SR-IOV Virtual Function with the usNIC capability. All users of hwdata are advised to upgrade to this updated package, which fixes this bug and adds these enhancements.
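To see the effect of such pci.ids updates in practice, you can list PCI devices with both their textual names and their numeric IDs; the following invocation is only a general illustration (the grep pattern is an example and matches nothing if no Virtio devices are present):
~]$ lspci -nn | grep -i virtio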
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/hwdata
|
15.24. Comparing Two Directory Server Instances
|
15.24. Comparing Two Directory Server Instances In certain situations, an administrator wants to check whether two Directory Servers are synchronized. The ds-replcheck utility enables you to compare two servers in online mode or two LDIF-formatted files in offline mode. Note To compare two databases offline, export them using the db2ldif -r command to include replication state information. If you compare two online servers, the contents of the databases usually differ if the servers are under heavy load. To work around this problem, ds-replcheck uses a lag time value, which you set by passing the -l time_in_seconds parameter to the utility. By default, this value is set to 300 seconds (5 minutes). If the utility detects an inconsistency that is within the lag time, it is not reported. This helps to reduce false positives. By default, if you excluded certain attributes in the replication agreement from being replicated, ds-replcheck reports these attributes as different. To ignore these attributes, pass the -i attribute_list parameter to the utility. For example, to compare the dc=example,dc=com suffix of two Directory Servers: The output of the utility contains the following sections: Database RUV's Lists the Replication Update Vectors (RUV) of the databases, including the minimum and maximum Change Sequence Numbers (CSN). For example: Entry Count Displays the total number of entries on both servers, including tombstone entries. For example: Tombstones Displays the number of tombstone entries on each replica. These entries are added to the total entry count. For example: Conflict Entries Lists the Distinguished Names (DN) of each conflict entry, the conflict type, and the date it was created. For example: Missing Entries Lists the DNs of each missing entry and the creation date from the other server where the entry resides. For example: Entry Inconsistencies Lists the DNs of the entries that contain attributes that differ from those on the other server. If state information is available, it is also displayed. If no state information for an attribute is available, it is listed as an origin value, which means that the value was not updated since the replication was initialized for the first time. For example:
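The -l and -i options described above can be added to the online comparison shown in the command list below. The following line is purely illustrative: the lag value of 600 seconds and the ignored attribute names are placeholders, while the bind and server details match the earlier example:
# ds-replcheck -D "cn=Directory Manager" -W -m ldap://server1.example.com:389 -r ldap://server2.example.com:389 -b "dc=example,dc=com" -l 600 -i memberOf,modifiersName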
|
[
"ds-replcheck -D \"cn=Directory Manager\" -W -m ldap://server1.example.com:389 -r ldap://server2.example.com:389 -b \" dc=example,dc=com \"",
"Supplier RUV: {replica 1 ldap://server1.example.com:389} 58e53b92000200010000 58e6ab46000000010000 {replica 2 ldap://server2.example.com:389} 58e53baa000000020000 58e69d7e000000020000 {replicageneration} 58e53b7a000000010000 Replica RUV: {replica 1 ldap://server1.example.com:389} 58e53ba1000000010000 58e6ab46000000010000 {replica 2 ldap://server2.example.com:389} 58e53baa000000020000 58e7e8a3000000020000 {replicageneration} 58e53b7a000000010000",
"Supplier: 12 Replica: 10",
"Supplier: 4 Replica: 2",
"Supplier Conflict Entries: 1 - nsuniqueid=48177227-2ab611e7-afcb801a-ecef6d49+uid= user1 ,dc=example,dc=com - Conflict: namingConflict (add) uid= user1 ,dc=example,dc=com - Glue entry: no - Created: Wed Apr 26 20:27:40 2017 Replica Conflict Entries: 1 - nsuniqueid=48177227-2ab611e7-afcb801a-ecef6d49+uid= user1 ,dc=example,dc=com - Conflict: namingConflict (add) uid= user1 ,dc=example,dc=com - Glue entry: no - Created: Wed Apr 26 20:27:40 2017",
"Entries missing on Supplier: - uid= user2 ,dc=example,dc=com (Created on Replica at: Wed Apr 12 14:43:24 2017) - uid= user3 ,dc=example,dc=com (Created on Replica at: Wed Apr 12 14:43:24 2017) Entries missing on Replica: - uid= user4 ,dc=example,dc=com (Created on Supplier at: Wed Apr 12 14:43:24 2017)",
"cn= group1 ,dc=example,dc=com --------------------------- Replica missing attribute \"objectclass\": - Supplier's State Info: objectClass;vucsn-58e53baa000000020000: top - Date: Wed Apr 5 14:47:06 2017 - Supplier's State Info: objectClass;vucsn-58e53baa000000020000: groupofuniquenames - Date: Wed Apr 5 14:47:06 2017"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/comparing_two_directory_server_databases
|
5.280. resource-agents
|
5.280. resource-agents 5.280.1. RHBA-2012:1419 - resource-agents bug fix update Updated resource-agents packages that fix a bug are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability environment for both the Pacemaker and rgmanager service managers. Bug Fix BZ# 864364 If the contents of the /proc/mounts file changed during a status check operation of the fs.sh file system resource agent, the status check could incorrectly detect that a mount was missing and mark a service as failed. This bug has been fixed and fs.sh no longer reports false failures in the described scenario. All users of resource-agents are advised to upgrade to these updated packages, which fix this bug. 5.280.2. RHBA-2012:1515 - resource-agents bug fix update Updated resource-agents packages that fix a bug are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability (HA) environment for both the Pacemaker and rgmanager service managers. Bug Fix BZ# 878023 Previously, when device failures caused logical volumes to go missing, HA LVM was unable to shut down. With this update, services can migrate to other machines that still have access to the devices, thus preventing this bug. All users of resource-agents are advised to upgrade to these updated packages, which fix this bug. 5.280.3. RHBA-2012:0947 - resource-agents bug fix and enhancement update Updated resource-agents packages that fix multiple bugs and add three enhancements are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability environment for both Pacemaker and rgmanager service managers. Bug Fixes BZ# 728086 Prior to this update, the fs-lib.sh resource agent library ignored the error codes greater than '1'. As a consequence, fs-lib.sh failed to recognize errors when a mount returned an error with a different error code, for example an iSCSI mount. This update modifies the underlying code so that the fs-lib.sh resource agent library now recognizes all errors as expected. BZ# 742859 Prior to this update, the Apache resource agent did not correctly generate the IPv6 configuration for the configuration file. As a consequence, Apache failed to work with IPv6 addresses. This update modifies the underlying code so that the Apache resource agent now generates a valid configuration file when IPv6 is in use. BZ# 746996 Prior to this update, the SAP Web Dispatcher and the TREX service were not monitored in the SAP resource agent script. This update adds the SAP Web Dispatcher and the TREX Service to the list of services that are checked for SAP. Now, the SAP Web Dispatcher and the TREX Service are monitored. BZ# 749713 Prior to this update, missing etab entries were not recreated due to an error in a regular expression and an incorrect flag on the "clufindhostname" command. As a consequence, NFS exports were not automatically recovered. This update corrects the regular expression and uses the "clufindhostname" command as expected. Now, NFS exports recover automatically when entries are removed from the etab file. BZ# 784357 Prior to this update, the configuration path variable for the resource agent was not correctly set. As a consequence, the wrong path for configuration files was used. 
This update modifies the configuration path variable so that the common configuration directory is now correctly set to prevent problems with the resource agents for Samba, Apache, and others. BZ# 799998 Prior to this update, the "netfs" script did not identify whether the file systems to be checked were network file systems before denying multiple mounts. As a consequence, network file systems could not be added twice. This update modifies the "netfs" script so that it verifies if file systems are network file systems and allows multiple mounts for these. Now, multiple mounts of the same network file system are allowed. Enhancements BZ# 712174 Prior to this update, no option to set tunnelled migrations with the Kernel-based Virtual Machine (KVM) was available. This update adds the "--tunnelled" option to the vm.sh resource agent to allow encrypted migrations between qemu virtual machines. BZ# 726500 Prior to this update, the SAP resource agent scripts did not reflect changes in the upstream version. This update merges Pacemaker and the Heartbeat SAP resource agent with the upstream version. BZ# 784209 The SAP database resource agent has been synchronized with the upstream resource agent to provide additional functionality and bug fixes. All users of resource-agents are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/resource-agents
|
31.3. Loading a Module
|
31.3. Loading a Module To load a kernel module, run the modprobe <module_name> command as root. For example, to load the wacom module, run: By default, modprobe attempts to load the module from the /lib/modules/ <kernel_version> /kernel/drivers/ directory. In this directory, each type of module has its own subdirectory, such as net/ and scsi/ , for network and SCSI interface drivers respectively. Some modules have dependencies, which are other kernel modules that must be loaded before the module in question can be loaded. A list of module dependencies is generated and maintained by the depmod program that is run automatically whenever a kernel or driver package is installed on the system. The depmod program keeps the list of dependencies in the /lib/modules/<kernel_version>/modules.dep file. The modprobe command always reads the modules.dep file when performing operations. When you ask modprobe to load a specific kernel module, it first examines the dependencies of that module, if there are any, and loads them if they are not already loaded into the kernel. modprobe resolves dependencies recursively: If necessary, it loads all dependencies of dependencies, and so on, thus ensuring that all dependencies are always met. You can use the -v (or --verbose ) option to cause modprobe to display detailed information about what it is doing, which may include loading module dependencies. The following is an example of loading the Fibre Channel over Ethernet module verbosely: Example 31.3. modprobe -v shows module dependencies as they are loaded This example shows that modprobe loaded the scsi_tgt , scsi_transport_fc , libfc and libfcoe modules as dependencies before finally loading fcoe . Also note that modprobe used the more " primitive " insmod command to insert the modules into the running kernel. Important Although the insmod command can also be used to load kernel modules, it does not resolve dependencies. Because of this, you should always load modules using modprobe instead.
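Because depmod records the dependency information in modules.dep, you can inspect a module's dependencies without loading it. The following sketch reuses the fcoe module from the example above; the exact output depends on the kernel version:
~]# modinfo -F depends fcoe
~]# grep '/fcoe.ko' /lib/modules/$(uname -r)/modules.dep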
|
[
"~]# modprobe wacom",
"~]# modprobe -v fcoe insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/scsi_tgt.ko insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/scsi_transport_fc.ko insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/libfc/libfc.ko insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/fcoe/libfcoe.ko insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/fcoe/fcoe.ko"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Loading_a_Module
|
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster
|
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on an AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain high availability. So the amount of storage consumed is three times the usable space. Note Usable space may vary when encryption is enabled or replica 2 pools are being used. 3.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class that you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.2. Scaling out storage capacity on an AWS cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM.
Practically, there is no limit on the number of nodes that can be added, but from the support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: adding a new node and scaling up the storage capacity. Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While we recommend adding nodes in multiples of three, you still get the flexibility of adding one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, expand the cluster first using the instructions that can be found here . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.1.2. Adding a node to a user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node (see the sketch after this section for approving all pending CSRs at once). <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node.
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity .
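The command list below shows how to list CSRs and how to approve a single CSR by name. When several nodes join at the same time, it can be convenient to approve every pending CSR in one pass; the following one-liner is a convenience sketch built from standard shell tools rather than a documented procedure:
oc get csr | grep -w Pending | awk '{print $1}' | xargs oc adm certificate approve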
|
[
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/scaling_storage/scaling_storage_capacity_of_aws_openshift_data_foundation_cluster
|
Chapter 5. Changing the CA trust flags
|
Chapter 5. Changing the CA trust flags The certificate authority (CA) trust flags define for which scenarios Directory Server trusts a CA certificate. For example, you set the flags to trust the certificate for TLS connections to the server and for certificate-based authentication. 5.1. Changing the CA trust flags using the command line You can set the following trust flags on a certificate authority (CA) certificate: C : Trusted CA T : Trusted CA client authentication c : Valid CA P : Trusted peer p : Valid peer u : Private key You specify the trust flags comma-separated in three categories: TLS, email, object signing. For example, to trust the CA for TLS encryption and certificate-based authentication, set the trust flags to CT,, . Prerequisites You imported a CA certificate to the network security services (NSS) database. Procedure Use the following command to change the trust flags of a CA certificate: # dsconf -D " cn=Directory Manager " ldap://server.example.com security ca-certificate set-trust-flags " Example CA " --flags " trust_flags " Verification Display all certificates in the NSS database: # certutil -d /etc/dirsrv/slapd- instance_name / -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Example CA CT,, Additional resources The certutil(1) man page 5.2. Changing the CA trust flags using the web console You can use the web console to change the CA trust flags. Prerequisites You imported a CA certificate to the network security services (NSS) database. Procedure Navigate to Server Security Certificate Management Trusted Certificate Authorities . Click the ... icon next to the CA certificate, and select Edit Trust Flags . Select the trust flags. Click Save . Verification Navigate to Server Security Certificate Management Trusted Certificate Authorities . Click > next to the CA certificate to display the trust flags.
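If the CA certificate mentioned in the prerequisites still needs to be imported into the NSS database, a dsconf-based import typically looks like the following sketch; the certificate file path and the "Example CA" nickname are placeholders:
# dsconf -D "cn=Directory Manager" ldap://server.example.com security ca-certificate add --file /root/example-ca.crt --name "Example CA"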
|
[
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com security ca-certificate set-trust-flags \" Example CA \" --flags \" trust_flags \"",
"certutil -d /etc/dirsrv/slapd- instance_name / -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Example CA CT,,"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_changing-the-ca-trust-flagssecuring-rhds
|
Chapter 134. KafkaConnectorSpec schema reference
|
Chapter 134. KafkaConnectorSpec schema reference Used in: KafkaConnector Property Property type Description class string The Class for the Kafka Connector. tasksMax integer The maximum number of tasks for the Kafka Connector. autoRestart AutoRestart Automatic restart of connector and tasks configuration. config map The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. pause boolean The pause property has been deprecated. Deprecated in Streams for Apache Kafka 2.6, use state instead. Whether the connector should be paused. Defaults to false. state string (one of [running, paused, stopped]) The state the connector should be in. Defaults to running.
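A minimal KafkaConnector resource that exercises these properties might look like the following sketch; the connector class, topic, file path, and the my-connect-cluster label value are placeholders, and the apiVersion is assumed to be kafka.strimzi.io/v1beta2:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster  # name of the KafkaConnect cluster that runs the connector (assumed)
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector  # example connector class
  tasksMax: 2
  autoRestart:
    enabled: true
  state: running
  config:                                   # connector.class and tasks.max must not be set here
    file: "/opt/kafka/LICENSE"
    topic: my-topic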
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaConnectorSpec-reference
|
Chapter 9. DeploymentConfig [apps.openshift.io/v1]
|
Chapter 9. DeploymentConfig [apps.openshift.io/v1] Description Deployment Configs define the template for a pod and manages deploying new images or configuration changes. A single deployment configuration is usually analogous to a single micro-service. Can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller. A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed. Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment is carried out and may be changed at any time. The latestVersion field is updated when a new deployment is triggered by any means. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object DeploymentConfigSpec represents the desired state of the deployment. status object DeploymentConfigStatus represents the current deployment state. 9.1.1. .spec Description DeploymentConfigSpec represents the desired state of the deployment. Type object Property Type Description minReadySeconds integer MinReadySeconds is the minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Paused indicates that the deployment config is paused resulting in no new deployments on template changes or changes in the template caused by other triggers. replicas integer Replicas is the number of desired replicas. revisionHistoryLimit integer RevisionHistoryLimit is the number of old ReplicationControllers to retain to allow for rollbacks. This field is a pointer to allow for differentiation between an explicit zero and not specified. Defaults to 10. (This only applies to DeploymentConfigs created via the new group API resource, not the legacy resource.) selector object (string) Selector is a label query over pods that should match the Replicas count. strategy object DeploymentStrategy describes how to perform a deployment. template PodTemplateSpec Template is the object that describes the pod that will be created if insufficient replicas are detected. test boolean Test ensures that this deployment config will have zero replicas except while a deployment is running. This allows the deployment config to be used as a continuous deployment test - triggering on images, running the deployment, and then succeeding or failing. Post strategy hooks and After actions can be used to integrate successful deployment with an action. 
triggers array Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers are defined, a new deployment can only occur as a result of an explicit client update to the DeploymentConfig with a new LatestVersion. If null, defaults to having a config change trigger. triggers[] object DeploymentTriggerPolicy describes a policy for a single trigger that results in a new deployment. 9.1.2. .spec.strategy Description DeploymentStrategy describes how to perform a deployment. Type object Property Type Description activeDeadlineSeconds integer ActiveDeadlineSeconds is the duration in seconds that the deployer pods for this deployment config may be active on a node before the system actively tries to terminate them. annotations object (string) Annotations is a set of key, value pairs added to custom deployer and lifecycle pre/post hook pods. customParams object CustomDeploymentStrategyParams are the input to the Custom deployment strategy. labels object (string) Labels is a set of key, value pairs added to custom deployer and lifecycle pre/post hook pods. recreateParams object RecreateDeploymentStrategyParams are the input to the Recreate deployment strategy. resources ResourceRequirements Resources contains resource requirements to execute the deployment and any hooks. rollingParams object RollingDeploymentStrategyParams are the input to the Rolling deployment strategy. type string Type is the name of a deployment strategy. 9.1.3. .spec.strategy.customParams Description CustomDeploymentStrategyParams are the input to the Custom deployment strategy. Type object Property Type Description command array (string) Command is optional and overrides CMD in the container Image. environment array (EnvVar) Environment holds the environment which will be given to the container for Image. image string Image specifies a container image which can carry out a deployment. 9.1.4. .spec.strategy.recreateParams Description RecreateDeploymentStrategyParams are the input to the Recreate deployment strategy. Type object Property Type Description mid object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. post object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. pre object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. timeoutSeconds integer TimeoutSeconds is the time to wait for updates before giving up. If the value is nil, a default will be used. 9.1.5. .spec.strategy.recreateParams.mid Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.6. 
.spec.strategy.recreateParams.mid.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.7. .spec.strategy.recreateParams.mid.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.8. .spec.strategy.recreateParams.mid.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.9. .spec.strategy.recreateParams.post Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.10. .spec.strategy.recreateParams.post.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.11. .spec.strategy.recreateParams.post.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.12. .spec.strategy.recreateParams.post.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 
Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.13. .spec.strategy.recreateParams.pre Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.14. .spec.strategy.recreateParams.pre.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.15. .spec.strategy.recreateParams.pre.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.16. .spec.strategy.recreateParams.pre.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.17. .spec.strategy.rollingParams Description RollingDeploymentStrategyParams are the input to the Rolling deployment strategy. Type object Property Type Description intervalSeconds integer IntervalSeconds is the time to wait between polling deployment status after update. If the value is nil, a default will be used. maxSurge IntOrString MaxSurge is the maximum number of pods that can be scheduled above the original number of pods. Value can be an absolute number (ex: 5) or a percentage of total pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxUnavailable is 0. By default, 25% is used. Example: when this is set to 30%, the new RC can be scaled up by 30% immediately when the rolling update starts. 
Once old pods have been killed, the new RC can be scaled up further, ensuring that the total number of pods running at any time during the update is at most 130% of the original pods. maxUnavailable IntOrString MaxUnavailable is the maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding down. This cannot be 0 if MaxSurge is 0. By default, 25% is used. Example: when this is set to 30%, the old RC can be scaled down by 30% immediately when the rolling update starts. Once new pods are ready, the old RC can be scaled down further, followed by scaling up the new RC, ensuring that at least 70% of the original number of pods are available at all times during the update. post object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. pre object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. timeoutSeconds integer TimeoutSeconds is the time to wait for updates before giving up. If the value is nil, a default will be used. updatePeriodSeconds integer UpdatePeriodSeconds is the time to wait between individual pod updates. If the value is nil, a default will be used. 9.1.18. .spec.strategy.rollingParams.post Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.19. .spec.strategy.rollingParams.post.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.20. .spec.strategy.rollingParams.post.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.21. .spec.strategy.rollingParams.post.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. 
If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.22. .spec.strategy.rollingParams.pre Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.23. .spec.strategy.rollingParams.pre.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.24. .spec.strategy.rollingParams.pre.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.25. .spec.strategy.rollingParams.pre.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.26. .spec.triggers Description Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers are defined, a new deployment can only occur as a result of an explicit client update to the DeploymentConfig with a new LatestVersion. If null, defaults to having a config change trigger. Type array 9.1.27. .spec.triggers[] Description DeploymentTriggerPolicy describes a policy for a single trigger that results in a new deployment. Type object Property Type Description imageChangeParams object DeploymentTriggerImageChangeParams represents the parameters to the ImageChange trigger. type string Type of the trigger 9.1.28. .spec.triggers[].imageChangeParams Description DeploymentTriggerImageChangeParams represents the parameters to the ImageChange trigger. Type object Required from Property Type Description automatic boolean Automatic means that the detection of a new tag value should result in an image update inside the pod template. 
containerNames array (string) ContainerNames is used to restrict tag updates to the specified set of container names in a pod. If multiple triggers point to the same containers, the resulting behavior is undefined. Future API versions will make this a validation error. If ContainerNames does not point to a valid container, the trigger will be ignored. Future API versions will make this a validation error. from ObjectReference From is a reference to an image stream tag to watch for changes. From.Name is the only required subfield - if From.Namespace is blank, the namespace of the current deployment trigger will be used. lastTriggeredImage string LastTriggeredImage is the last image to be triggered. 9.1.29. .status Description DeploymentConfigStatus represents the current deployment state. Type object Required latestVersion observedGeneration replicas updatedReplicas availableReplicas unavailableReplicas Property Type Description availableReplicas integer AvailableReplicas is the total number of available pods targeted by this deployment config. conditions array Conditions represents the latest available observations of a deployment config's current state. conditions[] object DeploymentCondition describes the state of a deployment config at a certain point. details object DeploymentDetails captures information about the causes of a deployment. latestVersion integer LatestVersion is used to determine whether the current deployment associated with a deployment config is out of sync. observedGeneration integer ObservedGeneration is the most recent generation observed by the deployment config controller. readyReplicas integer Total number of ready pods targeted by this deployment. replicas integer Replicas is the total number of pods targeted by this deployment config. unavailableReplicas integer UnavailableReplicas is the total number of unavailable pods targeted by this deployment config. updatedReplicas integer UpdatedReplicas is the total number of non-terminated pods targeted by this deployment config that have the desired template spec. 9.1.30. .status.conditions Description Conditions represents the latest available observations of a deployment config's current state. Type array 9.1.31. .status.conditions[] Description DeploymentCondition describes the state of a deployment config at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 9.1.32. .status.details Description DeploymentDetails captures information about the causes of a deployment. Type object Required causes Property Type Description causes array Causes are extended data associated with all the causes for creating a new deployment causes[] object DeploymentCause captures information about a particular cause of a deployment. message string Message is the user specified change message, if this deployment was triggered manually by the user 9.1.33. .status.details.causes Description Causes are extended data associated with all the causes for creating a new deployment Type array 9.1.34. .status.details.causes[] Description DeploymentCause captures information about a particular cause of a deployment. 
Type object Required type Property Type Description imageTrigger object DeploymentCauseImageTrigger represents details about the cause of a deployment originating from an image change trigger type string Type of the trigger that resulted in the creation of a new deployment 9.1.35. .status.details.causes[].imageTrigger Description DeploymentCauseImageTrigger represents details about the cause of a deployment originating from an image change trigger Type object Required from Property Type Description from ObjectReference From is a reference to the changed object which triggered a deployment. The field may have the kinds DockerImage, ImageStreamTag, or ImageStreamImage. 9.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/deploymentconfigs GET : list or watch objects of kind DeploymentConfig /apis/apps.openshift.io/v1/watch/deploymentconfigs GET : watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs DELETE : delete collection of DeploymentConfig GET : list or watch objects of kind DeploymentConfig POST : create a DeploymentConfig /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs GET : watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name} DELETE : delete a DeploymentConfig GET : read the specified DeploymentConfig PATCH : partially update the specified DeploymentConfig PUT : replace the specified DeploymentConfig /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs/{name} GET : watch changes to an object of kind DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/status GET : read status of the specified DeploymentConfig PATCH : partially update status of the specified DeploymentConfig PUT : replace status of the specified DeploymentConfig 9.2.1. /apis/apps.openshift.io/v1/deploymentconfigs Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind DeploymentConfig Table 9.2. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfigList schema 401 - Unauthorized Empty 9.2.2. /apis/apps.openshift.io/v1/watch/deploymentconfigs Table 9.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 9.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs Table 9.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DeploymentConfig Table 9.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 9.8. Body parameters Parameter Type Description body DeleteOptions schema Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind DeploymentConfig Table 9.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 9.11. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a DeploymentConfig Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 202 - Accepted DeploymentConfig schema 401 - Unauthorized Empty 9.2.4. /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs Table 9.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 9.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name} Table 9.18. 
Global path parameters Parameter Type Description name string name of the DeploymentConfig namespace string object name and auth scope, such as for teams and projects Table 9.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DeploymentConfig Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.21. Body parameters Parameter Type Description body DeleteOptions schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DeploymentConfig Table 9.23. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DeploymentConfig Table 9.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.25. Body parameters Parameter Type Description body Patch schema Table 9.26. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DeploymentConfig Table 9.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.28. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.29. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty 9.2.6. /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs/{name} Table 9.30. Global path parameters Parameter Type Description name string name of the DeploymentConfig namespace string object name and auth scope, such as for teams and projects Table 9.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.7. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/status Table 9.33. Global path parameters Parameter Type Description name string name of the DeploymentConfig namespace string object name and auth scope, such as for teams and projects Table 9.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DeploymentConfig Table 9.35. HTTP responses HTTP code Reponse body 200 - OK DeploymentConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DeploymentConfig Table 9.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. 
Force flag must be unset for non-apply patch requests. Table 9.37. Body parameters Parameter Type Description body Patch schema Table 9.38. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DeploymentConfig Table 9.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.40. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.41. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty
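As an illustration only, and not part of the upstream reference, the following minimal Python sketch shows how the status subresource documented in section 9.2.7 could be read and then partially updated over HTTP. The API server URL, bearer token, namespace, DeploymentConfig name, and the use of the requests library are assumptions for the example; the endpoint path and the dryRun and fieldManager query parameters come from the tables above, while the merge-patch media type is the standard Kubernetes content type for a MergePatch request.
import requests

# Assumed placeholders - replace with values for your own cluster.
API_SERVER = "https://api.example.com:6443"
TOKEN = "REPLACE_WITH_BEARER_TOKEN"
NAMESPACE = "my-project"
NAME = "example-dc"

# Endpoint from section 9.2.7: .../deploymentconfigs/{name}/status
url = (f"{API_SERVER}/apis/apps.openshift.io/v1/namespaces/"
       f"{NAMESPACE}/deploymentconfigs/{NAME}/status")
headers = {"Authorization": f"Bearer {TOKEN}"}

# Read the current status (HTTP method GET above).
current = requests.get(url, headers=headers, verify=False)
current.raise_for_status()

# Partially update the status (HTTP method PATCH above) as a dry run,
# sending a MergePatch body with the documented query parameters.
patch = {"status": {"replicas": current.json()["status"].get("replicas", 0)}}
resp = requests.patch(
    url,
    params={"dryRun": "All", "fieldManager": "example-client"},
    headers={**headers, "Content-Type": "application/merge-patch+json"},
    json=patch,
    verify=False,  # only for test clusters with self-signed certificates
)
resp.raise_for_status()
print(resp.status_code)  # expect 200 - OK per Table 9.38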
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/deploymentconfig-apps-openshift-io-v1
|
Chapter 2. Installation
|
Chapter 2. Installation This chapter guides you through the steps to install AMQ Python in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To install packages on Red Hat Enterprise Linux, you must register your system. To use AMQ Python, you must install Python in your environment. 2.2. Installing on Red Hat Enterprise Linux Procedure Use the subscription-manager command to subscribe to the required package repositories. If necessary, replace <variant> with the value for your variant of Red Hat Enterprise Linux (for example, server or workstation). Red Hat Enterprise Linux 6 $ sudo subscription-manager repos --enable=amq-clients-2-for-rhel-6- <variant> -rpms Red Hat Enterprise Linux 7 $ sudo subscription-manager repos --enable=amq-clients-2-for-rhel-7- <variant> -rpms Red Hat Enterprise Linux 8 $ sudo subscription-manager repos --enable=amq-clients-2-for-rhel-8-x86_64-rpms Use the yum command to install the python-qpid-proton and python-qpid-proton-docs packages. $ sudo yum install python-qpid-proton python-qpid-proton-docs For more information about using packages, see Appendix B, Using Red Hat Enterprise Linux packages. 2.3. Installing on Microsoft Windows Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads. Locate the Red Hat AMQ Clients entry in the INTEGRATION AND AUTOMATION category. Click Red Hat AMQ Clients. The Software Downloads page opens. Download the AMQ Clients 2.8.0 Python .whl file for your Python version. Python 3.6 python_qpid_proton-0.32.0-cp36-cp36m-win_amd64.whl Python 3.8 python_qpid_proton-0.32.0-cp38-cp38-win_amd64.whl Open a command prompt window and use the pip install command to install the .whl file. Python 3.6 > pip install python_qpid_proton-0.32.0-cp36-cp36m-win_amd64.whl Python 3.8 > pip install python_qpid_proton-0.32.0-cp38-cp38-win_amd64.whl
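After the package or wheel is installed, a quick import check can confirm that the client is available. This snippet is a suggestion rather than part of the documented procedure; it only assumes that the proton module shipped by python-qpid-proton can be imported.
import proton

# Show where the module was loaded from and, if exposed, its version information.
print("proton imported from:", proton.__file__)
print("proton version:", getattr(proton, "VERSION", "unknown"))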
|
[
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-6- <variant> -rpms",
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-7- <variant> -rpms",
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-8-x86_64-rpms",
"sudo yum install python-qpid-proton python-qpid-proton-docs",
"> pip install python_qpid_proton-0.32.0-cp36-cp36m-win_amd64.whl",
"> pip install python_qpid_proton-0.32.0-cp38-cp38-win_amd64.whl"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_python_client/installation
|
Chapter 11. ClustersService
|
Chapter 11. ClustersService 11.1. GetClusterDefaultValues GET /v1/cluster-defaults 11.1.1. Description 11.1.2. Parameters 11.1.3. Return Type V1ClusterDefaultsResponse 11.1.4. Content Type application/json 11.1.5. Responses Table 11.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterDefaultsResponse 0 An unexpected error response. RuntimeError 11.1.6. Samples 11.1.7. Common object reference 11.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.1.7.3. V1ClusterDefaultsResponse Field Name Required Nullable Type Description Format mainImageRepository String collectorImageRepository String kernelSupportAvailable Boolean 11.2. 
GetKernelSupportAvailable GET /v1/clusters-env/kernel-support-available GetKernelSupportAvailable is deprecated in favor of GetClusterDefaultValues. 11.2.1. Description 11.2.2. Parameters 11.2.3. Return Type V1KernelSupportAvailableResponse 11.2.4. Content Type application/json 11.2.5. Responses Table 11.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1KernelSupportAvailableResponse 0 An unexpected error response. RuntimeError 11.2.6. Samples 11.2.7. Common object reference 11.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.2.7.3. V1KernelSupportAvailableResponse Field Name Required Nullable Type Description Format kernelSupportAvailable Boolean 11.3. GetClusters GET /v1/clusters 11.3.1. Description 11.3.2. 
Parameters 11.3.2.1. Query Parameters Name Description Required Default Pattern query - null 11.3.3. Return Type V1ClustersList 11.3.4. Content Type application/json 11.3.5. Responses Table 11.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClustersList 0 An unexpected error response. RuntimeError 11.3.6. Samples 11.3.7. Common object reference 11.3.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.3.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.3.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.3.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.3.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.3.7.5. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.3.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.3.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.3.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.3.7.9. StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.3.7.10. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.3.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.3.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.3.7.13. 
StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.3.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.3.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.3.7.16. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.3.7.17. StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.3.7.18. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.3.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.3.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.3.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.3.7.22. 
StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.3.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.3.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.3.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.3.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.3.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.3.7.28. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.3.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.3.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.3.7.31. 
StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.3.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.3.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.3.7.34. V1ClustersList Field Name Required Nullable Type Description Format clusters List of StorageCluster clusterIdToRetentionInfo Map of V1DecommissionedClusterRetentionInfo 11.3.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 11.4. DeleteCluster DELETE /v1/clusters/{id} 11.4.1. Description 11.4.2. Parameters 11.4.2.1. Path Parameters Name Description Required Default Pattern id X null 11.4.3. Return Type Object 11.4.4. Content Type application/json 11.4.5. Responses Table 11.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 11.4.6. Samples 11.4.7. Common object reference 11.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.5. GetCluster GET /v1/clusters/{id} 11.5.1. Description 11.5.2. Parameters 11.5.2.1. Path Parameters Name Description Required Default Pattern id X null 11.5.3. Return Type V1ClusterResponse 11.5.4. Content Type application/json 11.5.5. Responses Table 11.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterResponse 0 An unexpected error response. RuntimeError 11.5.6. Samples 11.5.7. Common object reference 11.5.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.5.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.5.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.5.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.5.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.5.7.5. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.5.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.5.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.5.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.5.7.9. StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.5.7.10. 
StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.5.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.5.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.5.7.13. StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.5.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.5.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.5.7.16. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.5.7.17. StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.5.7.18. 
StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.5.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.5.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.5.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.5.7.22. StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.5.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.5.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.5.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.5.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.5.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.5.7.28. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. 
It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.5.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.5.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.5.7.31. StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.5.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.5.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.5.7.34. V1ClusterResponse Field Name Required Nullable Type Description Format cluster StorageCluster clusterRetentionInfo V1DecommissionedClusterRetentionInfo 11.5.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 11.6. PutCluster PUT /v1/clusters/{id} 11.6.1. Description 11.6.2. Parameters 11.6.2.1. Path Parameters Name Description Required Default Pattern id X null 11.6.2.2. Body Parameter Name Description Required Default Pattern body StorageCluster X 11.6.3. Return Type V1ClusterResponse 11.6.4. Content Type application/json 11.6.5. Responses Table 11.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterResponse 0 An unexpected error response. RuntimeError 11.6.6. Samples 11.6.7. Common object reference 11.6.7.1. 
ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.6.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.6.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.6.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.6.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.6.7.5. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.6.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.6.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.6.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.6.7.9. StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.6.7.10. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.6.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.6.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.6.7.13. 
StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.6.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.6.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.6.7.16. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.6.7.17. StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.6.7.18. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.6.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.6.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.6.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.6.7.22. 
StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.6.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.6.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.6.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.6.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.6.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.6.7.28. StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.6.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.6.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.6.7.31. 
StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.6.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.6.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.6.7.34. V1ClusterResponse Field Name Required Nullable Type Description Format cluster StorageCluster clusterRetentionInfo V1DecommissionedClusterRetentionInfo 11.6.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32 11.7. PostCluster POST /v1/clusters 11.7.1. Description 11.7.2. Parameters 11.7.2.1. Body Parameter Name Description Required Default Pattern body StorageCluster X 11.7.3. Return Type V1ClusterResponse 11.7.4. Content Type application/json 11.7.5. Responses Table 11.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1ClusterResponse 0 An unexpected error response. RuntimeError 11.7.6. Samples 11.7.7. Common object reference 11.7.7.1. ClusterHealthStatusHealthStatusLabel UNAVAILABLE: Only collector can have unavailable status Enum Values UNINITIALIZED UNAVAILABLE UNHEALTHY DEGRADED HEALTHY 11.7.7.2. ClusterUpgradeStatusUpgradability SENSOR_VERSION_HIGHER: SENSOR_VERSION_HIGHER occurs when we detect that the sensor is running a newer version than this Central. This is unexpected, but can occur depending on the patches a customer does. In this case, we will NOT automatically "upgrade" the sensor, since that would be a downgrade, even if the autoupgrade setting is on. The user will be allowed to manually trigger the upgrade, but they are strongly discouraged from doing so without upgrading Central first, since this is an unsupported configuration. Enum Values UNSET UP_TO_DATE MANUAL_UPGRADE_REQUIRED AUTO_UPGRADE_POSSIBLE SENSOR_VERSION_HIGHER 11.7.7.3. ClusterUpgradeStatusUpgradeProcessStatus Field Name Required Nullable Type Description Format active Boolean id String targetVersion String upgraderImage String initiatedAt Date date-time progress StorageUpgradeProgress type UpgradeProcessStatusUpgradeProcessType UPGRADE, CERT_ROTATION, 11.7.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. 
Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 11.7.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 11.7.7.5. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 11.7.7.6. StorageAWSProviderMetadata Field Name Required Nullable Type Description Format accountId String 11.7.7.7. StorageAdmissionControlHealthInfo AdmissionControlHealthInfo carries data about admission control deployment but does not include admission control health status derived from this data. Aggregated admission control health status is not included because it is derived in central and not in the component that first reports AdmissionControlHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredPods Integer int32 totalReadyPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain admission control health info. 11.7.7.8. StorageAdmissionControllerConfig Field Name Required Nullable Type Description Format enabled Boolean timeoutSeconds Integer int32 scanInline Boolean disableBypass Boolean enforceOnUpdates Boolean 11.7.7.9. 
StorageAuditLogFileState Field Name Required Nullable Type Description Format collectLogsSince Date date-time lastAuditId String 11.7.7.10. StorageAzureProviderMetadata Field Name Required Nullable Type Description Format subscriptionId String 11.7.7.11. StorageCluster Field Name Required Nullable Type Description Format id String name String type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, labels Map of string mainImage String collectorImage String centralApiEndpoint String runtimeSupport Boolean collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, admissionController Boolean admissionControllerUpdates Boolean admissionControllerEvents Boolean status StorageClusterStatus dynamicConfig StorageDynamicClusterConfig tolerationsConfig StorageTolerationsConfig priority String int64 healthStatus StorageClusterHealthStatus slimCollector Boolean helmConfig StorageCompleteClusterConfig mostRecentSensorId StorageSensorDeploymentIdentification auditLogState Map of StorageAuditLogFileState For internal use only. initBundleId String managedBy StorageManagerType MANAGER_TYPE_UNKNOWN, MANAGER_TYPE_MANUAL, MANAGER_TYPE_HELM_CHART, MANAGER_TYPE_KUBERNETES_OPERATOR, 11.7.7.12. StorageClusterCertExpiryStatus Field Name Required Nullable Type Description Format sensorCertExpiry Date date-time sensorCertNotBefore Date date-time 11.7.7.13. StorageClusterHealthStatus Field Name Required Nullable Type Description Format id String collectorHealthInfo StorageCollectorHealthInfo admissionControlHealthInfo StorageAdmissionControlHealthInfo scannerHealthInfo StorageScannerHealthInfo sensorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, collectorHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, overallHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, admissionControlHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, scannerHealthStatus ClusterHealthStatusHealthStatusLabel UNINITIALIZED, UNAVAILABLE, UNHEALTHY, DEGRADED, HEALTHY, lastContact Date date-time healthInfoComplete Boolean 11.7.7.14. StorageClusterMetadata ClusterMetadata contains metadata information about the cluster infrastructure. Field Name Required Nullable Type Description Format type StorageClusterMetadataType UNSPECIFIED, AKS, ARO, EKS, GKE, OCP, OSD, ROSA, name String Name represents the name under which the cluster is registered with the cloud provider. In case of self managed OpenShift it is the name chosen by the OpenShift installer. id String Id represents a unique ID under which the cluster is registered with the cloud provider. Not all cluster types have an id. For all OpenShift clusters, this is the Red Hat cluster_id registered with OCM. 11.7.7.15. StorageClusterMetadataType Enum Values UNSPECIFIED AKS ARO EKS GKE OCP OSD ROSA 11.7.7.16. StorageClusterStatus Field Name Required Nullable Type Description Format sensorVersion String DEPRECATEDLastContact Date This field has been deprecated starting release 49.0. Use healthStatus.lastContact instead. date-time providerMetadata StorageProviderMetadata orchestratorMetadata StorageOrchestratorMetadata upgradeStatus StorageClusterUpgradeStatus certExpiryStatus StorageClusterCertExpiryStatus 11.7.7.17. 
StorageClusterType Enum Values GENERIC_CLUSTER KUBERNETES_CLUSTER OPENSHIFT_CLUSTER OPENSHIFT4_CLUSTER 11.7.7.18. StorageClusterUpgradeStatus Field Name Required Nullable Type Description Format upgradability ClusterUpgradeStatusUpgradability UNSET, UP_TO_DATE, MANUAL_UPGRADE_REQUIRED, AUTO_UPGRADE_POSSIBLE, SENSOR_VERSION_HIGHER, upgradabilityStatusReason String mostRecentProcess ClusterUpgradeStatusUpgradeProcessStatus 11.7.7.19. StorageCollectionMethod Enum Values UNSET_COLLECTION NO_COLLECTION KERNEL_MODULE EBPF CORE_BPF 11.7.7.20. StorageCollectorHealthInfo CollectorHealthInfo carries data about collector deployment but does not include collector health status derived from this data. Aggregated collector health status is not included because it is derived in central and not in the component that first reports CollectorHealthInfo (sensor). Field Name Required Nullable Type Description Format version String totalDesiredPods Integer int32 totalReadyPods Integer int32 totalRegisteredNodes Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain collector health info. 11.7.7.21. StorageCompleteClusterConfig Encodes a complete cluster configuration minus ID/Name identifiers including static and dynamic settings. Field Name Required Nullable Type Description Format dynamicConfig StorageDynamicClusterConfig staticConfig StorageStaticClusterConfig configFingerprint String clusterLabels Map of string 11.7.7.22. StorageDynamicClusterConfig The difference between Static and Dynamic cluster config is that Dynamic values are sent over the Central to Sensor gRPC connection. This has the benefit of allowing for "hot reloading" of values without restarting Secured cluster components. Field Name Required Nullable Type Description Format admissionControllerConfig StorageAdmissionControllerConfig registryOverride String disableAuditLogs Boolean 11.7.7.23. StorageGoogleProviderMetadata Field Name Required Nullable Type Description Format project String clusterName String Deprecated in favor of providerMetadata.cluster.name. 11.7.7.24. StorageManagerType Enum Values MANAGER_TYPE_UNKNOWN MANAGER_TYPE_MANUAL MANAGER_TYPE_HELM_CHART MANAGER_TYPE_KUBERNETES_OPERATOR 11.7.7.25. StorageOrchestratorMetadata Field Name Required Nullable Type Description Format version String openshiftVersion String buildDate Date date-time apiVersions List of string 11.7.7.26. StorageProviderMetadata Field Name Required Nullable Type Description Format region String zone String google StorageGoogleProviderMetadata aws StorageAWSProviderMetadata azure StorageAzureProviderMetadata verified Boolean cluster StorageClusterMetadata 11.7.7.27. StorageScannerHealthInfo ScannerHealthInfo represents health info of a scanner instance that is deployed on a secured cluster (so called "local scanner"). When the scanner is deployed on a central cluster, the following message is NOT used. ScannerHealthInfo carries data about scanner deployment but does not include scanner health status derived from this data. Aggregated scanner health status is not included because it is derived in central and not in the component that first reports ScannerHealthInfo (sensor). Field Name Required Nullable Type Description Format totalDesiredAnalyzerPods Integer int32 totalReadyAnalyzerPods Integer int32 totalDesiredDbPods Integer int32 totalReadyDbPods Integer int32 statusErrors List of string Collection of errors that occurred while trying to obtain scanner health info. 11.7.7.28. 
StorageSensorDeploymentIdentification StackRoxDeploymentIdentification aims at uniquely identifying a StackRox Sensor deployment. It is used to determine whether a sensor connection comes from a sensor pod that has restarted or was recreated (possibly after a network partition), or from a deployment in a different namespace or cluster. Field Name Required Nullable Type Description Format systemNamespaceId String defaultNamespaceId String appNamespace String appNamespaceId String appServiceaccountId String k8sNodeName String 11.7.7.29. StorageStaticClusterConfig The difference between Static and Dynamic cluster config is that Static values are not sent over the Central to Sensor gRPC connection. They are used, for example, to generate manifests that can be used to set up the Secured Cluster's k8s components. They are not dynamically reloaded. Field Name Required Nullable Type Description Format type StorageClusterType GENERIC_CLUSTER, KUBERNETES_CLUSTER, OPENSHIFT_CLUSTER, OPENSHIFT4_CLUSTER, mainImage String centralApiEndpoint String collectionMethod StorageCollectionMethod UNSET_COLLECTION, NO_COLLECTION, KERNEL_MODULE, EBPF, CORE_BPF, collectorImage String admissionController Boolean admissionControllerUpdates Boolean tolerationsConfig StorageTolerationsConfig slimCollector Boolean admissionControllerEvents Boolean 11.7.7.30. StorageTolerationsConfig Field Name Required Nullable Type Description Format disabled Boolean 11.7.7.31. StorageUpgradeProgress Field Name Required Nullable Type Description Format upgradeState UpgradeProgressUpgradeState UPGRADE_INITIALIZING, UPGRADER_LAUNCHING, UPGRADER_LAUNCHED, PRE_FLIGHT_CHECKS_COMPLETE, UPGRADE_OPERATIONS_DONE, UPGRADE_COMPLETE, UPGRADE_INITIALIZATION_ERROR, PRE_FLIGHT_CHECKS_FAILED, UPGRADE_ERROR_ROLLING_BACK, UPGRADE_ERROR_ROLLED_BACK, UPGRADE_ERROR_ROLLBACK_FAILED, UPGRADE_ERROR_UNKNOWN, UPGRADE_TIMED_OUT, upgradeStatusDetail String since Date date-time 11.7.7.32. UpgradeProcessStatusUpgradeProcessType UPGRADE: UPGRADE represents a sensor version upgrade. CERT_ROTATION: CERT_ROTATION represents an upgrade process that only rotates the TLS certs used by the cluster, without changing anything else. Enum Values UPGRADE CERT_ROTATION 11.7.7.33. UpgradeProgressUpgradeState UPGRADER_LAUNCHING: In-progress states. UPGRADE_COMPLETE: The success state. PLEASE NUMBER ALL IN-PROGRESS STATES ABOVE THIS AND ALL ERROR STATES BELOW THIS. UPGRADE_INITIALIZATION_ERROR: Error states. Enum Values UPGRADE_INITIALIZING UPGRADER_LAUNCHING UPGRADER_LAUNCHED PRE_FLIGHT_CHECKS_COMPLETE UPGRADE_OPERATIONS_DONE UPGRADE_COMPLETE UPGRADE_INITIALIZATION_ERROR PRE_FLIGHT_CHECKS_FAILED UPGRADE_ERROR_ROLLING_BACK UPGRADE_ERROR_ROLLED_BACK UPGRADE_ERROR_ROLLBACK_FAILED UPGRADE_ERROR_UNKNOWN UPGRADE_TIMED_OUT 11.7.7.34. V1ClusterResponse Field Name Required Nullable Type Description Format cluster StorageCluster clusterRetentionInfo V1DecommissionedClusterRetentionInfo 11.7.7.35. V1DecommissionedClusterRetentionInfo Field Name Required Nullable Type Description Format isExcluded Boolean daysUntilDeletion Integer int32
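As an illustrative sketch only, the PostCluster operation described above can be exercised with a plain HTTP client. The host name central.example.com, the ROX_API_TOKEN environment variable, and the field values below are assumptions for the example and are not taken from this reference; only the path, the request body type (StorageCluster), and the response type (V1ClusterResponse) come from the tables above.
# curl -sk -X POST "https://central.example.com/v1/clusters" \
    -H "Authorization: Bearer ${ROX_API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "example-secured-cluster",
          "type": "KUBERNETES_CLUSTER",
          "mainImage": "quay.io/stackrox-io/main",
          "centralApiEndpoint": "central.example.com:443",
          "collectionMethod": "CORE_BPF",
          "admissionController": false
        }'
A successful call returns HTTP 200 with a V1ClusterResponse body whose cluster field echoes the stored object, including the server-assigned id; any other outcome is reported using the RuntimeError structure described in section 11.7.7.5.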
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"AuditLogFileState tracks the last audit log event timestamp and ID that was collected by Compliance For internal use only",
"next available tag: 3"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/clustersservice
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/monitoring_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
|
Chapter 16. Guest Virtual Machine Device Configuration
|
Chapter 16. Guest Virtual Machine Device Configuration Red Hat Enterprise Linux 7 supports three classes of devices for guest virtual machines: Emulated devices are purely virtual devices that mimic real hardware, allowing unmodified guest operating systems to work with them using their standard in-box drivers. Virtio devices (also known as paravirtualized ) are purely virtual devices designed to work optimally in a virtual machine. Virtio devices are similar to emulated devices, but non-Linux virtual machines do not include the drivers they require by default. Virtualization management software like the Virtual Machine Manager ( virt-manager ) and the Red Hat Virtualization Hypervisor install these drivers automatically for supported non-Linux guest operating systems. Red Hat Enterprise Linux 7 supports up to 216 virtio devices. For more information, see Chapter 5, KVM Paravirtualized (virtio) Drivers . Assigned devices are physical devices that are exposed to the virtual machine. This method is also known as passthrough . Device assignment allows virtual machines exclusive access to PCI devices for a range of tasks, and allows PCI devices to appear and behave as if they were physically attached to the guest operating system. Red Hat Enterprise Linux 7 supports up to 32 assigned devices per virtual machine. Device assignment is supported on PCIe devices, including select graphics devices . Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts. Red Hat Enterprise Linux 7 supports PCI hot plug of devices exposed as single-function slots to the virtual machine. Single-function host devices and individual functions of multi-function host devices may be configured to enable this. Configurations exposing devices as multi-function PCI slots to the virtual machine are recommended only for non-hotplug applications. For more information on specific devices and related limitations, see Section 23.17, "Devices" . Note Platform support for interrupt remapping is required to fully isolate a guest with assigned devices from the host. Without such support, the host may be vulnerable to interrupt injection attacks from a malicious guest. In an environment where guests are trusted, the administrator may opt-in to still allow PCI device assignment using the allow_unsafe_interrupts option to the vfio_iommu_type1 module. This may either be done persistently by adding a .conf file (for example local.conf ) to /etc/modprobe.d containing the following: or dynamically using the sysfs entry to do the same: 16.1. PCI Devices PCI device assignment is only available on hardware platforms supporting either Intel VT-d or AMD IOMMU. These Intel VT-d or AMD IOMMU specifications must be enabled in the host BIOS for PCI device assignment to function. Procedure 16.1. Preparing an Intel system for PCI device assignment Enable the Intel VT-d specifications The Intel VT-d specifications provide hardware support for directly assigning a physical device to a virtual machine. These specifications are required to use PCI device assignment with Red Hat Enterprise Linux. The Intel VT-d specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. The terms used to refer to these specifications can differ between manufacturers; consult your system manufacturer's documentation for the appropriate terms.
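As an additional sanity check that is not part of the original procedure, you can verify from the running host whether the firmware actually exposes the VT-d (DMAR) tables before and after changing the BIOS setting. The command below is offered for convenience as an assumption rather than as a step from this guide:
# dmesg | grep -i -e DMAR -e IOMMU
If the firmware setting is enabled, the kernel log contains lines that reference the ACPI DMAR table; once the intel_iommu=on parameter from the next step is in place and the host has been rebooted, the same command also reports that the IOMMU has been enabled. No output usually means the BIOS option is still disabled or the platform does not support VT-d.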
Activate Intel VT-d in the kernel Activate Intel VT-d in the kernel by adding the intel_iommu=on and iommu=pt parameters to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in the /etc/sysconfig/grub file. The example below is a modified grub file with Intel VT-d activated. Regenerate config file Regenerate /etc/grub2.cfg by running: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. Procedure 16.2. Preparing an AMD system for PCI device assignment Enable the AMD IOMMU specifications The AMD IOMMU specifications are required to use PCI device assignment in Red Hat Enterprise Linux. These specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. Enable IOMMU kernel support Append iommu=pt to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in /etc/sysconfig/grub so that AMD IOMMU specifications are enabled at boot. Regenerate config file Regenerate /etc/grub2.cfg by running: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. Note For further information on IOMMU, see Appendix E, Working with IOMMU Groups . 16.1.1. Assigning a PCI Device with virsh These steps cover assigning a PCI device to a virtual machine on a KVM hypervisor. This example uses a PCIe network controller with the PCI identifier code, pci_0000_01_00_0 , and a fully virtualized guest machine named guest1-rhel7-64 . Procedure 16.3. Assigning a PCI device to a guest virtual machine with virsh Identify the device First, identify the PCI device designated for device assignment to the virtual machine. Use the lspci command to list the available PCI devices. You can refine the output of lspci with grep . This example uses the Ethernet controller highlighted in the following output: This Ethernet controller is shown with the short identifier 00:19.0 . We need to find out the full identifier used by virsh in order to assign this PCI device to a virtual machine. To do so, use the virsh nodedev-list command to list all devices of a particular type ( pci ) that are attached to the host machine. Then look at the output for the string that maps to the short identifier of the device you wish to use. This example shows the string that maps to the Ethernet controller with the short identifier 00:19.0 . Note that the : and . characters are replaced with underscores in the full identifier. Record the PCI device number that maps to the device you want to use; this is required in other steps. Review device information Information on the domain, bus, and function are available from output of the virsh nodedev-dumpxml command: # virsh nodedev-dumpxml pci_0000_00_19_0 <device> <name>pci_0000_00_19_0</name> <parent>computer</parent> <driver> <name>e1000e</name> </driver> <capability type='pci'> <domain>0</domain> <bus>0</bus> <slot>25</slot> <function>0</function> <product id='0x1502'>82579LM Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device> Figure 16.1. Dump contents Note An IOMMU group is determined based on the visibility and isolation of devices from the perspective of the IOMMU.
Each IOMMU group may contain one or more devices. When multiple devices are present, all endpoints within the IOMMU group must be claimed for any device within the group to be assigned to a guest. This can be accomplished either by also assigning the extra endpoints to the guest or by detaching them from the host driver using virsh nodedev-detach . Devices contained within a single group may not be split between multiple guests or split between host and guest. Non-endpoint devices such as PCIe root ports, switch ports, and bridges should not be detached from the host drivers and will not interfere with assignment of endpoints. Devices within an IOMMU group can be determined using the iommuGroup section of the virsh nodedev-dumpxml output. Each member of the group is provided in a separate "address" field. This information may also be found in sysfs using the following: An example of the output from this would be: To assign only 0000.01.00.0 to the guest, the unused endpoint should be detached from the host before starting the guest: Determine required configuration details See the output from the virsh nodedev-dumpxml pci_0000_00_19_0 command for the values required for the configuration file. The example device has the following values: bus = 0, slot = 25 and function = 0. The decimal configuration uses those three values: Add configuration details Run virsh edit , specifying the virtual machine name, and add a device entry in the <devices> section to assign the PCI device to the guest virtual machine. For example: <devices> [...] <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> </hostdev> [...] </devices> Figure 16.2. Add PCI device Alternately, run virsh attach-device , specifying the virtual machine name and the guest's XML file: Note PCI devices may include an optional read-only memory (ROM) module , also known as an option ROM or expansion ROM , for delivering device firmware or pre-boot drivers (such as PXE) for the device. Generally, these option ROMs also work in a virtualized environment when using PCI device assignment to attach a physical PCI device to a VM. However, in some cases, the option ROM can be unnecessary, which may cause the VM to boot more slowly, or the pre-boot driver delivered by the device can be incompatible with virtualization, which may cause the guest OS boot to fail. In such cases, Red Hat recommends masking the option ROM from the VM. To do so: On the host, verify that the device to assign has an expansion ROM base address register (BAR). To do so, use the lspci -v command for the device, and check the output for a line that includes the following: Add the <rom bar='off'/> element as a child of the <hostdev> element in the guest's XML configuration: <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> <rom bar='off'/> </hostdev> Start the virtual machine The PCI device should now be successfully assigned to the virtual machine, and accessible to the guest operating system. 16.1.2. Assigning a PCI Device with virt-manager PCI devices can be added to guest virtual machines using the graphical virt-manager tool. The following procedure adds a Gigabit Ethernet controller to a guest virtual machine. Procedure 16.4. Assigning a PCI device to a guest virtual machine using virt-manager Open the hardware settings Open the guest virtual machine and click the Add Hardware button to add a new device to the virtual machine. 
Figure 16.3. The virtual machine hardware information window Select a PCI device Select PCI Host Device from the Hardware list on the left. Select an unused PCI device. Note that selecting PCI devices presently in use by another guest causes errors. In this example, a spare audio controller is used. Click Finish to complete setup. Figure 16.4. The Add new virtual hardware wizard Add the new device The setup is complete and the guest virtual machine now has direct access to the PCI device. Figure 16.5. The virtual machine hardware information window Note If device assignment fails, there may be other endpoints in the same IOMMU group that are still attached to the host. There is no way to retrieve group information using virt-manager, but virsh commands can be used to analyze the bounds of the IOMMU group and if necessary sequester devices. See the Note in Section 16.1.1, "Assigning a PCI Device with virsh" for more information on IOMMU groups and how to detach endpoint devices using virsh. 16.1.3. PCI Device Assignment with virt-install It is possible to assign a PCI device when installing a guest using the virt-install command. To do this, use the --host-device parameter. Procedure 16.5. Assigning a PCI device to a virtual machine with virt-install Identify the device Identify the PCI device designated for device assignment to the guest virtual machine. The virsh nodedev-list command lists all devices attached to the system, and identifies each PCI device with a string. To limit output to only PCI devices, enter the following command: Record the PCI device number; the number is needed in other steps. Information on the domain, bus and function are available from output of the virsh nodedev-dumpxml command: <device> <name>pci_0000_01_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>1</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device> Figure 16.6. PCI device file contents Note If there are multiple endpoints in the IOMMU group and not all of them are assigned to the guest, you will need to manually detach the other endpoint(s) from the host by running the following command before you start the guest: See the Note in Section 16.1.1, "Assigning a PCI Device with virsh" for more information on IOMMU groups. Add the device Use the PCI identifier output from the virsh nodedev command as the value for the --host-device parameter. Complete the installation Complete the guest installation. The PCI device should be attached to the guest. 16.1.4. Detaching an Assigned PCI Device When a host PCI device has been assigned to a guest machine, the host can no longer use the device. If the PCI device is in managed mode (configured using the managed='yes' parameter in the domain XML file ), it attaches to the guest machine and detaches from the guest machine and re-attaches to the host machine as necessary. If the PCI device is not in managed mode, you can detach the PCI device from the guest machine and re-attach it using virsh or virt-manager . Procedure 16.6. 
Detaching a PCI device from a guest with virsh Detach the device Use the following command to detach the PCI device from the guest by removing it in the guest's XML file: Re-attach the device to the host (optional) If the device is in managed mode, skip this step. The device will be returned to the host automatically. If the device is not using managed mode, use the following command to re-attach the PCI device to the host machine: For example, to re-attach the pci_0000_01_00_0 device to the host: The device is now available for host use. Procedure 16.7. Detaching a PCI Device from a guest with virt-manager Open the virtual hardware details screen In virt-manager , double-click the virtual machine that contains the device. Select the Show virtual hardware details button to display a list of virtual hardware. Figure 16.7. The virtual hardware details button Select and remove the device Select the PCI device to be detached from the list of virtual devices in the left panel. Figure 16.8. Selecting the PCI device to be detached Click the Remove button to confirm. The device is now available for host use. 16.1.5. PCI Bridges Peripheral Component Interconnects (PCI) bridges are used to attach to devices such as network cards, modems and sound cards. Just like their physical counterparts, virtual devices can also be attached to a PCI Bridge. In the past, only 31 PCI devices could be added to any guest virtual machine. Now, when a 31st PCI device is added, a PCI bridge is automatically placed in the 31st slot, moving the additional PCI device to the PCI bridge. Each PCI bridge has 31 slots for 31 additional devices, all of which can be bridges. In this manner, over 900 devices can be available for guest virtual machines. For an example of an XML configuration for PCI bridges, see Domain XML example for PCI Bridge . Note that this configuration is set up automatically, and it is not recommended to adjust manually. 16.1.6. PCI Device Assignment Restrictions PCI device assignment (attaching PCI devices to virtual machines) requires host systems to have AMD IOMMU or Intel VT-d support to enable device assignment of PCIe devices. Red Hat Enterprise Linux 7 has limited PCI configuration space access by guest device drivers. This limitation could cause drivers that are dependent on device capabilities or features present in the extended PCI configuration space, to fail configuration. There is a limit of 32 total assigned devices per Red Hat Enterprise Linux 7 virtual machine. This translates to 32 total PCI functions, regardless of the number of PCI bridges present in the virtual machine or how those functions are combined to create multi-function slots. Platform support for interrupt remapping is required to fully isolate a guest with assigned devices from the host. Without such support, the host may be vulnerable to interrupt injection attacks from a malicious guest. In an environment where guests are trusted, the administrator may opt-in to still allow PCI device assignment using the allow_unsafe_interrupts option to the vfio_iommu_type1 module. This may either be done persistently by adding a .conf file (for example local.conf ) to /etc/modprobe.d containing the following: or dynamically using the sysfs entry to do the same:
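For convenience, the two variants referenced in the sentence above are reproduced here from the command listing that follows this chapter. The file name local.conf is only an example; any file name ending in .conf under /etc/modprobe.d works. To apply the setting persistently, create the file with this single line:
options vfio_iommu_type1 allow_unsafe_interrupts=1
Or apply the setting dynamically for the running kernel:
# echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts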
|
[
"options vfio_iommu_type1 allow_unsafe_interrupts=1",
"echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts",
"GRUB_CMDLINE_LINUX=\"rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us USD([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/ rhcrashkernel-param || :) rhgb quiet intel_iommu=on iommu=pt \"",
"grub2-mkconfig -o /etc/grub2.cfg",
"grub2-mkconfig -o /etc/grub2.cfg",
"lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)",
"virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_ 00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0",
"virsh nodedev-dumpxml pci_0000_00_19_0 <device> <name>pci_0000_00_19_0</name> <parent>computer</parent> <driver> <name>e1000e</name> </driver> <capability type='pci'> <domain>0</domain> <bus>0</bus> <slot>25</slot> <function>0</function> <product id='0x1502'>82579LM Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device>",
"ls /sys/bus/pci/devices/ 0000:01:00.0 /iommu_group/devices/",
"0000:01:00.0 0000:01:00.1",
"virsh nodedev-detach pci_0000_01_00_1",
"bus='0' slot='25' function='0'",
"virsh edit guest1-rhel7-64",
"<devices> [...] <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> </hostdev> [...] </devices>",
"virsh attach-device guest1-rhel7-64 file.xml",
"Expansion ROM at",
"<hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> <rom bar='off'/> </hostdev>",
"virsh start guest1-rhel7-64",
"lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)",
"virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0",
"virsh nodedev-dumpxml pci_0000_01_00_0",
"<device> <name>pci_0000_01_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>1</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device>",
"virsh nodedev-detach pci_0000_00_19_1",
"virt-install --name=guest1-rhel7-64 --disk path=/var/lib/libvirt/images/guest1-rhel7-64.img,size=8 --vcpus=2 --ram=2048 --location=http://example1.com/installation_tree/RHEL7.0-Server-x86_64/os --nonetworks --os-type=linux --os-variant=rhel7 --host-device= pci_0000_01_00_0",
"virsh detach-device name_of_guest file.xml",
"virsh nodedev-reattach device",
"virsh nodedev-reattach pci_0000_01_00_0",
"options vfio_iommu_type1 allow_unsafe_interrupts=1",
"echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-guest_virtual_machine_device_configuration
|
Nodes
|
Nodes OpenShift Container Platform 4.16 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1",
"oc create -f pod-disruption-budget.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"horizontalpodautoscaler.autoscaling/hello-node autoscaled",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0",
"oc get deployment hello-node",
"NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config",
"type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60",
"behavior: scaleDown: stabilizationWindowSeconds: 300",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled",
"oc edit hpa hpa-resource-metrics-memory",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11",
"oc create -f <file-name>.yaml",
"oc get hpa cpu-autoscale",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler",
"Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max",
"oc create -f <file-name>.yaml",
"oc create -f hpa.yaml",
"horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created",
"oc get hpa hpa-resource-metrics-memory",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m",
"oc describe hpa hpa-resource-metrics-memory",
"Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group \"apps\"",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc describe hpa <pod-name>",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc get all -n openshift-vertical-pod-autoscaler",
"NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>",
"oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 3",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc get pods -n openshift-vertical-pod-autoscaler -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>",
"resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi",
"resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3",
"oc get vpa <vpa-name> --output yaml",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"",
"spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M",
"apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi",
"apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi",
"apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>",
"apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true",
"oc get pods",
"NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender",
"oc create -f <file-name>.yaml",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod",
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"",
"oc delete namespace openshift-vertical-pod-autoscaler",
"oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io",
"oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io",
"oc delete crd verticalpodautoscalers.autoscaling.k8s.io",
"oc delete MutatingWebhookConfiguration vpa-webhook-config",
"oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testSecret",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testParameter",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers",
"oc apply -f azure-provider.yaml",
"SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"",
"SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"",
"oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}",
"oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"",
"oc create -f secret-provider-class-azure.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"secret1",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1",
"my-secret-value",
"helm repo add hashicorp https://helm.releases.hashicorp.com",
"helm repo update",
"oc new-project vault",
"oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite",
"oc adm policy add-scc-to-user privileged -z vault -n vault",
"oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault",
"helm install vault hashicorp/vault --namespace=vault --set \"server.dev.enabled=true\" --set \"injector.enabled=false\" --set \"csi.enabled=true\" --set \"global.openshift=true\" --set \"injector.agentImage.repository=docker.io/hashicorp/vault\" --set \"server.image.repository=docker.io/hashicorp/vault\" --set \"csi.image.repository=docker.io/hashicorp/vault-csi-provider\" --set \"csi.agent.image.repository=docker.io/hashicorp/vault\" --set \"csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers\"",
"oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/securityContext\", \"value\": {\"privileged\": true} }]'",
"oc get pods -n vault",
"NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s",
"oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value",
"oc exec vault-0 --namespace=vault -- vault kv get secret/example1",
"= Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value",
"oc exec vault-0 --namespace=vault -- vault auth enable kubernetes",
"Success! Enabled kubernetes auth method at: kubernetes/",
"TOKEN_REVIEWER_JWT=\"USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"",
"KUBERNETES_SERVICE_IP=\"USD(oc get svc kubernetes --namespace=default -o go-template=\"{{ .spec.clusterIP }}\")\"",
"oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config issuer=\"https://kubernetes.default.svc.cluster.local\" token_reviewer_jwt=\"USD{TOKEN_REVIEWER_JWT}\" kubernetes_host=\"https://USD{KUBERNETES_SERVICE_IP}:443\" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
"Success! Data written to: auth/kubernetes/config",
"oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF",
"Success! Uploaded policy: csi",
"oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi bound_service_account_names=default bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace policies=csi ttl=20m",
"Success! Data written to: auth/kubernetes/role/csi",
"oc get pods -n vault",
"NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m",
"oc get pods -n openshift-cluster-csi-drivers | grep -E \"secrets\"",
"secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: \"csi\" vaultAddress: \"http://vault.vault:8200\" objects: | - secretPath: \"secret/data/example1\" objectName: \"testSecret1\" secretKey: \"testSecret1",
"oc create -f secret-provider-class-vault.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-vault-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testSecret1",
"oc exec busybox-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1",
"my-secret-value",
"oc edit secretproviderclass my-azure-provider 1",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"",
"oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1",
"status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.29.4",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc get pods -n openshift-run-once-duration-override-operator",
"NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s",
"oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true",
"apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done",
"oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds",
"activeDeadlineSeconds: 3600",
"oc edit runoncedurationoverride cluster",
"apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1",
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n openshift-keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10",
"oc project <project_name> 1",
"oc create serviceaccount thanos 1",
"apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token",
"oc create -f <file_name>.yaml",
"oc describe serviceaccount thanos 1",
"Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n openshift-keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication",
"oc create -f <filename>.yaml",
"oc get scaledjob <scaled_job_name>",
"NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project openshift-keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc label node node1 e2e-az-name=e2e-az1",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"oc label node node1 e2e-az-name=e2e-az3",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]",
"oc apply -f project.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.29.4",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.29.4",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.29.4",
"Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector",
"oc edit namespace <name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.29.4",
"oc label <resource> <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.29.4",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1",
"apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated",
"apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1",
"oc describe pod nginx -n default",
"Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp",
"kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #",
"oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #",
"oc adm new-project <name> --node-selector=\"\"",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #",
"oc create -f daemonset.yaml",
"oc get pods",
"hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m",
"oc describe pod/hello-daemonset-cx6md|grep Node",
"Node: openshift-node01.hostname.com/10.14.20.134",
"oc describe pod/hello-daemonset-e3md9|grep Node",
"Node: openshift-node02.hostname.com/10.14.20.137",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc delete cronjob/<cron_job_name>",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc create -f <file-name>.yaml",
"oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 timeZone: Etc/UTC 2 concurrencyPolicy: \"Replace\" 3 startingDeadlineSeconds: 200 4 suspend: true 5 successfulJobsHistoryLimit: 3 6 failedJobsHistoryLimit: 1 7 jobTemplate: 8 spec: template: metadata: labels: 9 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 10 #",
"oc create -f <file-name>.yaml",
"oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.29.4 node1.example.com Ready worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.29.4 node1.example.com NotReady,SchedulingDisabled worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.29.4 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.29.4-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.29.4 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.29.4-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.29.4 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.29.4-30.rhaos4.10.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.29.4",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.29.4-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.29.4 Kube-Proxy Version: v1.29.4 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) 
openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #",
"oc get pod --selector=<nodeSelector>",
"oc get pod --selector=kubernetes.io/os",
"oc get pod -l=<nodeSelector>",
"oc get pod -l kubernetes.io/os=linux",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.29.4",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>",
"oc label nodes webconsole-7f7f6 unhealthy=true",
"kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #",
"oc label pods --all <key_1>=<value_1>",
"oc label pods --all status=unhealthy",
"oc adm cordon <node>",
"oc adm cordon node1.example.com",
"node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled",
"oc adm uncordon <node1>",
"oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>",
"oc get machinesets -n openshift-machine-api",
"oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api",
"oc edit machineset <machine-set-name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get machineconfigpool --show-labels",
"NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False",
"oc label machineconfigpool worker custom-kubelet=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #",
"oc create -f <file-name>",
"oc create -f master-kube-config.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2",
"oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api",
"oc get machinesets <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All",
"oc edit schedulers.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #",
"oc create -f 99-worker-setsebool.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.29.4 ip-10-0-136-243.ec2.internal Ready master 34m v1.29.4 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.29.4 ip-10-0-142-249.ec2.internal Ready master 34m v1.29.4 ip-10-0-153-11.ec2.internal Ready worker 28m v1.29.4 ip-10-0-153-150.ec2.internal Ready master 34m v1.29.4",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"oc label machineconfigpool worker kubelet-swap=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #",
"#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"oc debug node/<node1>",
"chroot /host",
"systemctl reboot",
"ssh core@<master-node>.<cluster_name>.<base_domain>",
"sudo systemctl reboot",
"oc adm uncordon <node1>",
"ssh core@<target_node>",
"sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"oc debug node/<node_name>",
"chroot /host",
"SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #",
"oc create -f <file_name>.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"USD(nproc) X 1/2 MiB",
"for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1",
"curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'",
"apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f myapp.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f myservice.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377",
"oc create -f mydb.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m",
"oc set volume <object_selection> <operation> <mandatory_parameters> <options>",
"oc set volume <object_type>/<name> [options]",
"oc set volume pod/p1",
"oc set volume dc --all --name=v1",
"oc set volume <object_type>/<name> --add [options]",
"oc set volume dc/registry --add",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP",
"oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'",
"oc set volume <object_type>/<name> --add --overwrite [options]",
"oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data",
"oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt",
"oc set volume <object_type>/<name> --remove [options]",
"oc set volume dc/d1 --remove --name=v1",
"oc set volume dc/d1 --remove --name=v1 --containers=c1",
"oc set volume rc/r1 --remove --confirm",
"oc rsh <pod>",
"sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3",
"apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data",
"echo -n \"admin\" | base64",
"YWRtaW4=",
"echo -n \"1f2d1e2e67df\" | base64",
"MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=",
"oc create -f <secrets-filename>",
"oc create -f secret.yaml",
"secret \"mysecret\" created",
"oc get secret <secret-name>",
"oc get secret mysecret",
"NAME TYPE DATA AGE mysecret Opaque 2 17h",
"oc get secret <secret-name> -o yaml",
"oc get secret mysecret -o yaml",
"apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque",
"kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1",
"oc create -f <your_yaml_file>.yaml",
"oc create -f secret-pod.yaml",
"pod \"test-projected-volume\" created",
"oc get pod <name>",
"oc get pod test-projected-volume",
"NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s",
"oc exec -it <pod> <command>",
"oc exec -it test-projected-volume -- /bin/sh",
"/ # ls",
"bin home root tmp dev proc run usr etc projected-volume sys var",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never",
"oc create -f volume-pod.yaml",
"oc logs -p dapi-volume-test-pod",
"cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory",
"oc create -f pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory",
"oc create -f volume-pod.yaml",
"apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth",
"oc create -f secret.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue",
"oc create -f configmap.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"oc rsync <source> <destination> [-c <container>]",
"<pod name>:<dir>",
"oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>",
"oc rsync /home/user/source devpod1234:/src -c user-container",
"oc rsync devpod1234:/src /home/user/source",
"oc rsync devpod1234:/src/status.txt /home/user/",
"rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"export RSYNC_RSH='oc rsh'",
"rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]",
"oc exec mypod date",
"Thu Apr 9 02:21:53 UTC 2015",
"/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>",
"/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> 5000 6000",
"Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000",
"oc port-forward <pod> 8888:5000",
"Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000",
"oc port-forward <pod> :5000",
"Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000",
"oc port-forward <pod> 0:5000",
"/proxy/nodes/<node_name>/portForward/<namespace>/<pod>",
"/proxy/nodes/node123.openshift.com/portForward/myns/mypod",
"sudo sysctl -a",
"oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml",
"apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.16.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3",
"oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml",
"Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. # apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'",
"oc apply -f reverse-path-fwd-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s",
"oc rsh example",
"sh-4.4# sysctl net.ipv4.conf.net1.rp_filter",
"net.ipv4.conf.net1.rp_filter = 1",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"",
"oc apply -f sysctl_pod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s",
"oc rsh sysctl-example",
"sh-4.4# sysctl kernel.shm_rmid_forced",
"kernel.shm_rmid_forced = 1",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f sysctl-example-unsafe.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m",
"oc label machineconfigpool worker custom-kubelet=sysctl",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"",
"oc apply -f set-sysctl-worker.yaml",
"oc get machineconfigpool worker -w",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f sysctl-example-safe-unsafe.yaml",
"Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created",
"oc get pod",
"NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s",
"oc rsh sysctl-example-safe-unsafe",
"sh-4.4# sysctl net.core.somaxconn",
"net.core.somaxconn = 1024",
"oc exec -ti no-priv -- /bin/bash",
"cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF",
"podman build .",
"io.kubernetes.cri-o.Devices: \"/dev/fuse\"",
"apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: \"/dev/fuse\"",
"spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - \"1000000\" securityContext: runAsUser: 1000",
"oc get events [-n <project>] 1",
"oc get events -n openshift-config",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod-spec.yaml",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ose-cluster-capacity",
"podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]",
"oc create -f <file_name>.yaml",
"oc create sa cluster-capacity-sa",
"oc create sa cluster-capacity-sa -n default",
"oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa",
"apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file_name>.yaml",
"oc create -f pod.yaml",
"oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap",
"oc create -f cluster-capacity-job.yaml",
"oc logs jobs/cluster-capacity-job",
"small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3",
"apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"",
"oc create -f <limit_range_file> -n <project> 1",
"oc get limits -n demoproject",
"NAME CREATED AT resource-limits 2020-07-15T17:14:23Z",
"oc describe limits resource-limits -n demoproject",
"Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -",
"oc delete limits <limit_name>",
"-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.",
"JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"",
"apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <file-name>.yaml",
"oc rsh test",
"env | grep MEMORY | sort",
"MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184",
"oc rsh test",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 0",
"sed -e '' </dev/zero",
"Killed",
"echo USD?",
"137",
"grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control",
"oom_kill 1",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed",
"oc get pod test -o yaml",
"status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running",
"oc get pod test",
"NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m",
"oc get pod test -o yaml",
"status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s",
"oc describe mc <name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd_unified_cgroup_hierarchy=1 1 cgroup_no_v1=\"all\" 2 psi=0",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.29.4 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.29.4 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.29.4 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.29.4 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.29.4 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.29.4",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"stat -c %T -f /sys/fs/cgroup",
"cgroup2fs",
"tmpfs",
"compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"",
"tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule",
"kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"",
"tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600 3",
"export OFFLINE_TOKEN=<copied_api_token>",
"export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )",
"curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq",
"{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }",
"export API_URL=<api_url> 1",
"export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.29.4 compute-1.example.com Ready worker 11m v1.29.4",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000",
"nmcli con up <network_interface>",
"{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }",
"sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition",
"coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>",
"apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.29.4 compute-1.example.com Ready worker 11m v1.29.4",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"topk(3, sum(increase(container_runtime_crio_containers_oom_count_total[1d])) by (name))",
"rate(container_runtime_crio_image_pulls_failure_total[1h]) / (rate(container_runtime_crio_image_pulls_success_total[1h]) + rate(container_runtime_crio_image_pulls_failure_total[1h]))",
"sum by (node) (container_memory_rss{id=\"/system.slice\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 80",
"sum by (node) (container_memory_rss{id=\"/system.slice/kubelet.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50",
"sum by (node) (container_memory_rss{id=\"/system.slice/crio.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50",
"sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 80",
"sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/kubelet.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50",
"sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/crio.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/nodes/index
|
Chapter 16. Configuring a multi-site, fault-tolerant messaging system using broker connections
|
Chapter 16. Configuring a multi-site, fault-tolerant messaging system using broker connections Large-scale enterprise messaging systems commonly have discrete broker clusters located in geographically distributed data centers. In the event of a data center outage, system administrators might need to preserve existing messaging data and ensure that client applications can continue to produce and consume messages. You can use broker connections to ensure continuity of your messaging system during a data center outage. This type of solution is called a multi-site, fault-tolerant architecture . Note Only the AMQP protocol is supported for communication between brokers for broker connections. A client can use any supported protocol. Currently, messages are converted to AMQP through the mirroring process. The following sections explain how to protect your messaging system from data center outages using broker connections: Section 16.1, "About broker connections" Section 16.2, "Configuring broker mirroring" Note Multi-site fault tolerance is not a replacement for high-availability (HA) broker redundancy within data centers. Broker redundancy based on live-backup groups provides automatic protection against single broker failures within single clusters. In contrast, multi-site fault tolerance protects against large-scale data center outages. 16.1. About broker connections With broker connections, a broker can establish a connection to another broker and mirror messages to and from that broker. AMQP server connections A broker can initiate connections to other endpoints using the AMQP protocol using broker connections. This means, for example, that the broker can connect to other AMQP servers and create elements on those connections. The following types of operations are supported on an AMQP server connection: Mirrors - The broker uses an AMQP connection to another broker and duplicates messages and sends acknowledgements over the wire. Senders - Messages received on specific queues are transferred to another broker. Receivers - The broker pulls messages from another broker. Peers - The broker creates both senders and receivers on AMQ Interconnect endpoints. This chapter describes how to use broker connections to create a fault-tolerant system. See Chapter 17, Bridging brokers for information about sender, receiver, and peer options. The following events are sent through mirroring: Message sending - Messages sent to one broker will be "replicated" to the target broker. Message acknowledgement - Acknowledgements removing messages at one broker will be sent to the target broker. Queue and address creation. Queue and address deletion. Note If the message is pending for a consumer on the target mirror, the acknowledgement will not succeed and the message might be delivered by both brokers. Mirroring does not block any operation and does not affect the performance of a broker. The broker only mirrors messages arriving from the point in time the mirror was configured. Previously existing messages will not be forwarded to other brokers. 16.2. Configuring broker mirroring You can use broker connections to mirror messages between a pair of brokers. Only one of the brokers can be active at any time. Prerequisites You have two working brokers. 
Procedure Create a broker-connections element in the broker.xml file for the first broker, for example: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC1"> <mirror/> </amqp-connection> </broker-connections> <hostname> The hostname of the other broker instance. <port> The port used by the broker on the other host. All messages on the first broker are mirrored to the second broker, but messages that existed before the mirror was created are not mirrored. If you want the first broker to mirror messages synchronously to ensure that the mirrored broker is up-to-date for disaster recovery, set the sync=true attribute in the amqp-connection element of the broker, as shown in the following example. Synchronous mirroring requires that messages sent by a broker to a mirrored broker are written to the volumes of both brokers at the same time. Once the write operation is complete on both brokers, the source broker acknowledges that the write request is complete and control is returned to clients. <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC2"> <mirror sync="true"/> </amqp-connection> </broker-connections> Note If the write request cannot be completed on the mirrored broker, for example, if the broker is unavailable, client connections are blocked until a mirror is available to complete the most recent write request. Note The broker connections name in the example, DC1 , is used to create a queue named USDACTIVEMQ_ARTEMIS_MIRROR_mirror . Make sure that the corresponding broker is configured to accept those messages, even though the queue is not visible on that broker. Create a broker-connections element in the broker.xml file for the second broker, for example: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC2"> <mirror/> </amqp-connection> </broker-connections> If you want the second broker to mirror messages synchronously, set the sync=true attribute in the amqp-connection element of the broker. For example: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC2"> <mirror sync="true"/> </amqp-connection> </broker-connections> (Optional) Configure the following parameters for the mirror, as required. queue-removal Specifies whether either queue or address removal events are sent. The default value is true . message-acknowledgments Specifies whether message acknowledgments are sent. The default value is true . queue-creation Specifies whether either queue or address creation events are sent. The default value is true . For example: <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC2"> <mirror sync="true" queue-removal="false" message-acknowledgments ="false" queue-creation="false"/> </amqp-connection> </broker-connections> (Optional) Customize the broker retry attempts to acknowledge messages on the target mirror. An acknowledgment might be received on a target mirror for a message that is not in the queue memory. To give the broker sufficient time to retry acknowledging the message on the target mirror, you can customize the following parameters for your environment: mirrorAckManagerQueueAttempts The number of attempts the broker makes to find a message in memory. The default value is 5 . If the broker does not find the message in memory after the specified number of attempts, the broker searches for the message in page files. mirrorAckManagerPageAttempts The number of attempts the broker makes to find a message in page files if the message was not found in memory. 
The default value is 2 . mirrorAckManagerRetryDelay The interval, in milliseconds, between attempts the broker makes to find a message to acknowledge in memory and then in page files. Specify any of these parameters outside of the broker-connections element. For example: <mirrorAckManagerQueueAttempts>8</mirrorAckManagerQueueAttempts> <broker-connections> <amqp-connection uri="tcp://<hostname>:<port>" name="DC2"> <mirror/> </amqp-connection> </broker-connections> (Optional) If messages are paged on the target mirror, set the mirrorPageTransaction to true if you want the broker to coordinate writing duplicate detection information with writing messages to page files. If the mirrorPageTransaction attribute is set to false , which is the default, and a communication failure occurs between the brokers, a duplicate message can, in rare circumstances, be written to the target mirror. Setting this parameter to true increases the broker's memory usage. Configure clients using the instructions documented in Section 15.6, "Configuring clients in a multi-site, fault-tolerant messaging system" , noting that with broker connections, there is no shared storage. Important Red Hat does not support client applications consuming messages from both brokers in a mirror configuration. To prevent clients from consuming messages on both brokers, disable the client acceptors on one of the brokers.
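If you want to confirm that mirroring is in place after completing the procedure, one option is to check the source broker for the store-and-forward mirror queue and watch its message count. The following is a minimal sketch of such a check and is not part of the procedure above: the instance directory, connection URL, and admin credentials are assumptions for illustration, and the artemis CLI options can vary between versions.
# Minimal verification sketch; the instance path, URL, and credentials are
# assumptions for illustration. Adjust them for your environment.
BROKER_INSTANCE=/var/opt/amq-broker/DC1   # hypothetical broker instance directory

# List queue statistics on the source broker and look for the mirror
# store-and-forward queue created by the <mirror/> broker connection.
"$BROKER_INSTANCE/bin/artemis" queue stat \
    --url tcp://localhost:61616 \
    --user admin --password admin | grep -i mirror
Running the check on both brokers shows whether messages are accumulating in the mirror queue, which usually indicates that the target broker is unreachable or not accepting the mirrored traffic.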
|
[
"<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC1\"> <mirror/> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC2\"> <mirror sync=\"true\"/> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC2\"> <mirror/> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC2\"> <mirror sync=\"true\"/> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC2\"> <mirror sync=\"true\" queue-removal=\"false\" message-acknowledgments =\"false\" queue-creation=\"false\"/> </amqp-connection> </broker-connections>",
"<mirrorAckManagerQueueAttempts>8</mirrorAckManagerQueueAttempts> <broker-connections> <amqp-connection uri=\"tcp://<hostname>:<port>\" name=\"DC2\"> <mirror/> </amqp-connection> </broker-connections>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/configuring-fault-tolerant-system-broker-connections-configuring
|
14.2.3. Requiring SSH for Remote Connections
|
14.2.3. Requiring SSH for Remote Connections For SSH to be truly effective, using insecure connection protocols should be prohibited. Otherwise, a user's password may be protected using SSH for one session, only to be captured later while logging in using Telnet. Some services to disable include telnet , rsh , rlogin , and vsftpd . To disable these services, type the following commands at a shell prompt: For more information on runlevels and configuring services in general, see Chapter 12, Services and Daemons .
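As a quick follow-up check, you can confirm that the insecure services are disabled in every runlevel and that the OpenSSH daemon itself remains enabled. The following is a minimal sketch for a Red Hat Enterprise Linux 6 host; it assumes the corresponding service scripts are installed, so adjust the service names to match your system.
# Confirm the insecure services are now disabled in all runlevels
for svc in telnet rsh rlogin vsftpd; do
    chkconfig --list "$svc" 2>/dev/null
done

# Make sure the OpenSSH daemon itself stays enabled and running
chkconfig sshd on
service sshd status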
|
[
"~]# chkconfig telnet off ~]# chkconfig rsh off ~]# chkconfig rlogin off ~]# chkconfig vsftpd off"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-configuration-requiring
|
Chapter 3. Getting support
|
Chapter 3. Getting support Windows Container Support for Red Hat OpenShift is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements . You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal . For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers . If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support.
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/windows-containers-support
|
Chapter 7. Managing content views
|
Chapter 7. Managing content views Red Hat Satellite uses content views to allow your hosts access to a deliberately curated subset of content. To do this, you must define which repositories to use and then apply certain filters to the content. The general workflow for creating content views for filtering and creating snapshots is as follows: Create a content view. Add one or more repositories that you want to the content view. Optional: Create one or more filters to refine the content of the content view. For more information, see Section 7.13, "Content filter examples" . Optional: Resolve any package dependencies for a content view. For more information, see Section 7.11, "Resolving package dependencies" . Publish the content view. Optional: Promote the content view to another environment. For more information, see Section 7.7, "Promoting a content view" . Attach the content host to the content view. If a repository is not associated with the content view, the file /etc/yum.repos.d/redhat.repo remains empty and systems registered to it cannot receive updates. Hosts can only be associated with a single content view. To associate a host with multiple content views, create a composite content view. For more information, see Section 7.9, "Creating a composite content view" . 7.1. Content views in Red Hat Satellite A content view is a deliberately curated subset of content that your hosts can access. By creating a content view, you can define the software versions used by a particular environment or Capsule Server. Each content view creates a set of repositories across each environment. Your Satellite Server stores and manages these repositories. For example, you can create content views in the following ways: A content view with older package versions for a production environment and another content view with newer package versions for a Development environment. A content view with a package repository required by an operating system and another content view with a package repository required by an application. A composite content view for a modular approach to managing content views. For example, you can use one content view for content for managing an operating system and another content view for content for managing an application. By creating a composite content view that combines both content views, you create a new repository that merges the repositories from each of the content views. However, the repositories for the content views still exist and you can keep managing them separately as well. Default Organization View A Default Organization View is an application-controlled content view for all content that is synchronized to Satellite. You can register a host to the Library environment on Satellite to consume the Default Organization View without configuring content views and lifecycle environments. Promoting a content view across environments When you promote a content view from one environment to the environment in the application lifecycle, Satellite updates the repository and publishes the packages. Example 7.1. 
Promoting a package from Development to Testing The repositories for Testing and Production contain the my-software -1.0-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 1 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm my-software -1.0-0.noarch.rpm If you promote Version 2 of the content view from Development to Testing , the repository for Testing updates to contain the my-software -1.1-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 2 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm This ensures hosts are designated to a specific environment but receive updates when that environment uses a new version of the content view. 7.2. Best practices for content views Content views that bundle content, such as Red Hat Enterprise Linux and additional software like Apache-2.4 or PostgreSQL-16.2 , are easier to maintain. Content views that are too small require more maintenance." If you require daily updated content, use the content view Default Organization View , which contains the latest synchronized content from all repositories and is available in the Library lifecycle environment. Restrict composite content views to situations that require greater flexibility, for example, if you update one content view on a weekly basis and another content view on a monthly basis. If you use composite content views, first publish the content views and then publish the composite content views. The more content views you bundle into composite content views, the more effort is needed to change or update content. Setting a lifecycle environment for content views is unnecessary if they are solely bundled to a composite content view. Automate creating and publishing composite content views and lifecycle environments by using a Hammer script or an Ansible playbook . Use cron jobs, systemd timers, or recurring logics for more visibility. Add the changes and date to the description of each published content view or composite content view version. The most recent activity, such as moving content to a new lifecycle environment, is displayed by date in the Satellite web UI, regardless of the latest changes to the content itself. Publishing a new content view or composite content view creates a new major version. Incremental errata updates increment the minor version. Note that you cannot change or reset this counter. 7.3. Best practices for patching content hosts Registering hosts to Satellite requires Red Hat Satellite Client 6, which contains the subscription-manager package, katello-host-tools package, and their dependencies. For more information, see Registering hosts in Managing hosts . Use the Satellite web UI to install, upgrade, and remove packages from hosts. You can update content hosts with job templates using SSH and Ansible. Apply errata on content hosts using the Satellite web UI. When patching packages on hosts using the default package manager, Satellite receives a list of packages and repositories to recalculate applicable errata and available updates. Modify or replace job templates to add custom steps. This allows you to run commands or execute scripts on hosts. When running bulk actions on hosts, bundle them by major operating system version, especially when upgrading packages. 
Select via remote execution - customize first to define the time when patches are applied to hosts when performing bulk actions. You cannot apply errata to packages that are not part of the repositories on Satellite and the attached content view. Modifications to installed packages using rpm or dpkg are sent to Satellite with the next run of apt , yum , or zypper . 7.4. Creating a content view Use this procedure to create a simple content view. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites While you can stipulate whether you want to resolve any package dependencies on a content view by content view basis, you might want to change the default Satellite settings to enable or disable package resolution for all content views. For more information, see Section 7.11, "Resolving package dependencies" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Click Create content view . In the Name field, enter a name for the view. Satellite automatically completes the Label field from the name you enter. In the Description field, enter a description of the view. In the Type field, select a Content view or a Composite content view . Optional: If you want to solve dependencies automatically every time you publish this content view, select the Solve dependencies checkbox. Dependency solving slows the publishing time and might ignore any content view filters you use. This can also cause errors when resolving dependencies for errata. Click Create content view . Content view steps Click Create content view to create the content view. In the Repositories tab, select the repository from the Type list that you want to add to your content view, select the checkbox to the available repositories you want to add, then click Add repositories . Click Publish new version and in the Description field, enter information about the version to log changes. Optional: You can enable a promotion path by clicking Promote to Select a lifecycle environment from the available promotion paths to promote new version . Click Next . On the Review page, you can review the environments you are trying to publish. Click Finish . You can view the content view on the Content Views page. To view more information about the content view, click the content view name. To register a host to your content view, see Registering Hosts in Managing hosts . CLI procedure Obtain a list of repository IDs: Create the content view and add repositories: For the --repository-ids option, you can find the IDs in the output of the hammer repository list command. Publish the view: Optional: To add a repository to an existing content view, enter the following command: Satellite Server creates the new version of the view and publishes it to the Library environment. 7.5. Copying a content view You can copy a content view in the Satellite web UI or you can use the Hammer CLI to copy an existing content view into a new content view. To use the CLI instead of the Satellite web UI, see the CLI procedure . Note A copied content view does not have the same history as the original content view. Version 1 of the copied content view begins at the last version of the original content view. As a result, you cannot promote an older version of a content view from the copied content view. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select the content view you want to copy. Click the vertical ellipsis icon and click Copy .
In the Name field, enter a name for the new content view and click Copy content view . Verification The copied content view appears on the Content views page. CLI procedure Copy the content view by using Hammer: Verification The Hammer command reports: 7.6. Viewing module streams In Satellite, you can view the module streams of the repositories in your content views. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to a published version of a Content View > Module Streams to view the module streams that are available for the Content Types. Use the Search field to search for specific modules. To view the information about the module, click the module and its corresponding tabs to include Details , Repositories , Profiles , and Artifacts . CLI procedure List all organizations: View all module streams for your organization: 7.7. Promoting a content view Use this procedure to promote content views across different lifecycle environments. To use the CLI instead of the Satellite web UI, see the CLI procedure . Permission requirements for content view promotion Non-administrator users require two permissions to promote a content view to an environment: promote_or_remove_content_views promote_or_remove_content_views_to_environment . The promote_or_remove_content_views permission restricts which content views a user can promote. The promote_or_remove_content_views_to_environment permission restricts the environments to which a user can promote content views. With these permissions you can assign users permissions to promote certain content views to certain environments, but not to other environments. For example, you can limit a user so that they are permitted to promote to test environments, but not to production environments. You must assign both permissions to a user to allow them to promote content views. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select the content view that you want to promote. Select the version that you want to promote, click the vertical ellipsis icon, and click Promote . Select the environment where you want to promote the content view and click Promote . Now the repository for the content view appears in all environments. CLI procedure Promote the content view using Hammer for each lifecycle environment: Now the database content is available in all environments. Alternatively, you can promote content views across all lifecycle environments within an organization using the following Bash script: ORG=" My_Organization " CVV_ID= My_Content_View_Version_ID for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done Verification Display information about your content view version to verify that it is promoted to the required lifecycle environments: steps To register a host to your content view, see Registering Hosts in Managing hosts . 7.8. Composite content views overview A composite content view combines the content from several content views. For example, you might have separate content views to manage an operating system and an application individually. You can use a composite content view to merge the contents of both content views into a new repository. The repositories for the original content views still exist but a new repository also exists for the combined content. 
Suppose you want to develop an application, example_software , that supports different database servers. The example_software stack consists of three layers: the application, a database, and an operating system. Example of four separate content views: Red Hat Enterprise Linux (Operating System) PostgreSQL (Database) MariaDB (Database) example_software (Application) From the content views, you can create two composite content views. Example composite content view for a PostgreSQL database: Composite content view 1 - example_software on PostgreSQL example_software (Application) PostgreSQL (Database) Red Hat Enterprise Linux (Operating System) Example composite content view for a MariaDB: Composite content view 2 - example_software on MariaDB example_software (Application) MariaDB (Database) Red Hat Enterprise Linux (Operating System) Each content view is then managed and published separately. When you create a version of the application, you publish a new version of the composite content views. You can also select the Auto Publish option when creating a composite content view, and then the composite content view is automatically republished when a content view it includes is republished. Repository restrictions Docker repositories cannot be included more than once in a composite content view. For example, if you attempt to include two content views using the same docker repository in a composite content view, Satellite Server reports an error. 7.9. Creating a composite content view Use this procedure to create a composite content view. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Click Create content view . In the Create content view window, enter a name for the view in the Name field. Red Hat Satellite automatically completes the Label field from the name you enter. Optional: In the Description field, enter a description of the view. On the Type tab, select Composite content view . Optional: If you want to automatically publish a new version of the composite content view when a content view is republished, select the Auto publish checkbox. Click Create content view . On the Content views tab, select the content views that you want to add to the composite content view, and then click Add content views . In the Add content views window, select the version of each content view. Optional: If you want to automatically update the content view to the latest version, select the Always update to latest version checkbox. Click Add , then click Publish new version . Optional: In the Description field, enter a description of the content view. In the Publish window, set the Promote switch, then select the lifecycle environment. Click Next , then click Finish . CLI procedure Before you create the composite content views, list the version IDs for your existing content views: Create a new composite content view. When the --auto-publish option is set to yes , the composite content view is automatically republished when a content view it includes is republished: Add a content view to the composite content view. You can identify content view, content view version, and Organization in the commands by either their ID or their name. To add multiple content views to the composite content view, repeat this step for every content view you want to include.
If you have the Always update to latest version option enabled for the content view: If you have the Always update to latest version option disabled for the content view: Publish the composite content view: Promote the composite content view across all environments: 7.10. Content filter overview Content views also use filters to include or restrict certain Yum content. Without these filters, a content view includes everything from the selected repositories. There are two types of content filters: Table 7.1. Filter types Filter Type Description Include You start with no content, then select which content to add from the selected repositories. Use this filter to combine multiple content items. Exclude You start with all content from selected repositories, then select which content to remove. Use this filter when you want to use most of a particular content repository while excluding certain packages. The filter uses all content in the repository except for the content you select. Include and Exclude filter combinations If using a combination of Include and Exclude filters, publishing a content view triggers the include filters first, then the exclude filters. In this situation, select which content to include, then which content to exclude from the inclusive subset. Content types You can filter content based on the following content types: Table 7.2. Content types Content Type Description RPM Filter packages based on their name and version number. The RPM option filters non-modular RPM packages and errata. Source RPMs are not affected by this filter and will still be available in the content view. Package Group Filter packages based on package groups. The list of package groups is based on the repositories added to the content view. Erratum (by ID) Select which specific errata to add to the filter. The list of Errata is based on the repositories added to the content view. Erratum (by Date and Type) Select an issued or updated date range and errata type (Bugfix, Enhancement, or Security) to add to the filter. Module Streams Select whether to include or exclude specific module streams. The Module Streams option filters modular RPMs and errata, but does not filter non-modular content that is associated with the selected module stream. Container Image Tag Select whether to include or exclude specific container image tags. 7.11. Resolving package dependencies Satellite can add dependencies of packages in a content view to the dependent repository when publishing the content view. To configure this, you can enable dependency solving . For example, dependency solving is useful when you incrementally add a single package to a content view version. You might need to enable dependency solving to install that package. However, dependency solving is unnecessary in most situations. For example: When incrementally adding a security errata to a content view, dependency solving can cause significant delays to content view publication without major benefits. Packages from a newer erratum might have dependencies that are incompatible with packages from an older content view version. Incrementally adding the erratum using dependency solving might include unwanted packages. As an alternative, consider updating the content view. Note Dependency solving only considers packages within the repositories of the content view. It does not consider packages installed on clients. For example, if a content view includes only AppStream, dependency solving does not include dependent BaseOS content at publish time.
For more information, see Limitations to Repository Dependency Resolution in Managing content . Dependency solving can lead to the following problems: Significant delay in content view publication Satellite examines every repository in a content view for dependencies. Therefore, publish time increases with more repositories. To mitigate this problem, use multiple content views with fewer repositories and combine them into composite content views. Ignored content view filters on dependent packages Satellite prioritizes resolving package dependencies over the rules in your filter. For example, if you create a filter for security purposes but enable dependency solving, Satellite can add packages that you might consider insecure. To mitigate this problem, carefully test filtering rules to determine the required dependencies. If dependency solving includes unwanted packages, manually identify the core basic dependencies that the extra packages and errata need. Example 7.2. Combining exclusion filters with dependency solving You want to recreate Red Hat Enterprise Linux 8.3 using content view filters and include selected errata from a later Red Hat Enterprise Linux 8 minor release. To achieve this, you create filters to exclude most of the errata after the Red Hat Enterprise Linux 8.3 release date, except a few that you need. Then, you enable dependency solving. In this situation, dependency solving might include more packages than expected. As a result, the host diverges from being a Red Hat Enterprise Linux 8.3 machine. If you do not need the extra errata and packages, do not configure content view filtering. Instead, enable and use the Red Hat Enterprise Linux 8.3 repository on the Content > Red Hat Repositories page in the Satellite web UI. Example 7.3. Excluding packages sometimes makes dependency solving impossible for DNF If you make a Red Hat Enterprise Linux 8.3 repository with a few excluded packages, dnf upgrade can sometimes fail. Do not enable dependency solving to resolve the problem. Instead, investigate the error from dnf and adjust the filters to stop excluding the missing dependency. Else, dependency solving might cause the repository to diverge from Red Hat Enterprise Linux 8.3. 7.12. Enabling dependency solving for a content view Use this procedure to enable dependency solving for a content view. Prerequisites Dependency solving is useful only in limited contexts. Before enabling it, ensure you read and understand Section 7.11, "Resolving package dependencies" Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . From the list of content views, select the required content view. On the Details tab, toggle Solve dependencies . 7.13. Content filter examples Use any of the following examples with the procedure that follows to build custom content filters. Note Filters can significantly increase the time to publish a content view. For example, if a content view publish task completes in a few minutes without filters, it can take 30 minutes after adding an exclude or include errata filter. Example 1 Create a repository with the base Red Hat Enterprise Linux packages. This filter requires a Red Hat Enterprise Linux repository added to the content view. Filter: Inclusion Type: Include Content Type: Package Group Filter: Select only the Base package group Example 2 Create a repository that excludes all errata, except for security updates, after a certain date. 
This is useful if you want to perform system updates on a regular basis with the exception of critical security updates, which must be applied immediately. This filter requires a Red Hat Enterprise Linux repository added to the content view. Filter: Inclusion Type: Exclude Content Type: Erratum (by Date and Type) Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On . Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata is filtered. Example 3 A combination of Example 1 and Example 2 where you only require the operating system packages and want to exclude recent bug fix and enhancement errata. This requires two filters attached to the same content view. The content view processes the Include filter first, then the Exclude filter. Filter 1: Inclusion Type: Include Content Type: Package Group Filter: Select only the Base package group Filter 2: Inclusion Type: Exclude Content Type: Erratum (by Date and Type) Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On . Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata is filtered. Example 4 Filter a specific module stream in a content view. Filter 1: Inclusion Type: Include Content Type: Module Stream Filter: Select only the specific module stream that you want for the content view, for example ant , and click Add Module Stream . Filter 2: Inclusion Type: Exclude Content Type: Package Filter: Add a rule to filter any non-modular packages that you want to exclude from the content view. If you do not filter the packages, the content view filter includes all non-modular packages associated with the module stream ant . Add a rule to exclude all * packages, or specify the package names that you want to exclude. For another example of how content filters work, see the following article: "How do content filters work in Satellite 6" . 7.14. Creating a content filter for Yum content You can filter content views containing Yum content to include or exclude specific packages, package groups, errata, or module streams. Filters are based on a combination of the name , version , and architecture . To use the CLI instead of the Satellite web UI, see the CLI procedure . For examples of how to build a filter, see Section 7.13, "Content filter examples" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view. On the Filters tab, click Create filter . Enter a name. From the Content type list, select a content type. From the Inclusion Type list, select either Include filter or Exclude filter . Optional: In the Description field, enter a description for the filter. Click Create filter to create your content filter. Depending on what you enter for Content Type , add rules to create the filter that you want. Select if you want the filter to Apply to subset of repositories or Apply to all repositories . Click Publish New Version to publish the filtered repository. Optional: In the Description field, enter a description of the changes. Click Create filter to publish a new version of the content view. You can promote this content view across all environments. CLI procedure Add a filter to the content view. 
Use the --inclusion false option to set the filter to an Exclude filter: Add a rule to the filter: Publish the content view: Promote the view across all environments: 7.15. Deleting multiple content view versions You can delete multiple content view versions simultaneously. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select the content view you want to delete versions of. On the Versions tab, select the checkbox of the version or versions you want to delete. Click the vertical ellipsis icon at the top of the list of content views. Click Delete to open the deletion wizard that shows any affected environments. If there are no affected environments, review the details and click Delete . If there are any affected environments, reassign any hosts or activation keys before deletion. Review the details of the actions. Click Delete . 7.16. Clearing the search filter If you search for specific content types using keywords in the Search text box and the search returns no results, click Clear search to clear all the search queries and reset the Search text box. If you use a filter to search for specific repositories in the Type text box and the search returns no results, click Clear filters to clear all active filters and reset the Type text box. 7.17. Standardizing content view empty states If there are no filters listed for a content view, click Create filter . A modal opens to show you the steps to create a filter. Follow these steps to add a new filter to create new content types. 7.18. Comparing content view versions Use this procedure to compare content view version functionality for Satellite. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view whose versions you want to compare. On the Versions tab, select the checkbox to any two versions you want to compare. Click Compare . The Compare screen has the pre-selected versions in the version dropdown menus and tabs for all content types found in either version. You can filter the results to show only the same, different, or all content types. You can compare different content view versions by selecting them from the dropdown menus. 7.19. Distributing archived content view versions The setting Distribute archived content view versions enables hosting of non-promoted content view version repositories in the Satellite content web application along with other repositories. This is useful while debugging to see what content is present in your content view versions. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Set the Distribute archived content view versions parameter to Yes . Click Submit . This enables the repositories of content view versions without lifecycle environments to be distributed at satellite.example.com/pulp/content/ My_Organization /content_views/ My_Content_View / My_Content_View_Version / . Note Older non-promoted content view versions are not distributed once the setting is enabled. Only new content view versions become distributed.
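Section 7.2 recommends automating publishing and promotion with a Hammer script run from cron or a systemd timer. The following is a minimal sketch of such a script that reuses the hammer commands shown in the CLI procedures above: it publishes a composite content view and then promotes a chosen version through every lifecycle environment in an organization. The organization name, content view name, and version number are placeholders for illustration.
#!/bin/bash
# Minimal automation sketch with placeholder names: publish a composite
# content view, then promote a chosen version through every lifecycle
# environment in the organization, reusing the hammer commands shown above.
ORG="My_Organization"                     # placeholder organization
CCV="Example_Composite_Content_View"      # placeholder composite content view
VERSION="2"                               # placeholder version number to promote

# Publish a new version, recording the date in the description as recommended in Section 7.2
hammer content-view publish \
    --name "$CCV" \
    --organization "$ORG" \
    --description "Automated publish on $(date +%F)"

# Promote the chosen version through the lifecycle environments, in ID order
for ENV_ID in $(hammer --no-headers --csv lifecycle-environment list --organization "$ORG" | awk -F, '{print $1}' | sort -n); do
    hammer content-view version promote \
        --content-view "$CCV" \
        --version "$VERSION" \
        --to-lifecycle-environment-id "$ENV_ID" \
        --organization "$ORG"
done
As with the promotion loop shown in Section 7.7, the loop iterates over all lifecycle environments returned for the organization, so trim or filter the list if you only want to promote to a subset of environments.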
|
[
"hammer repository list --organization \" My_Organization \"",
"hammer content-view create --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \" --repository-ids 1,2",
"hammer content-view publish --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \"",
"hammer content-view add-repository --name \" My_Content_View \" --organization \" My_Organization \" --repository-id repository_ID",
"hammer content-view copy --name My_original_CV_name --new-name My_new_CV_name",
"hammer content-view copy --id=5 --new-name=\"mixed_copy\" Content view copied.",
"hammer organization list",
"hammer module-stream list --organization-id My_Organization_ID",
"hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"ORG=\" My_Organization \" CVV_ID= My_Content_View_Version_ID for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done",
"hammer content-view version info --id My_Content_View_Version_ID",
"hammer content-view version list --organization \" My_Organization \"",
"hammer content-view create --composite --auto-publish yes --name \" Example_Composite_Content_View \" --description \"Example composite content view\" --organization \" My_Organization \"",
"hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --latest --organization \" My_Organization \"",
"hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --component-content-view-version-id Content_View_Version_ID --organization \" My_Organization \"",
"hammer content-view publish --name \" Example_Composite_Content_View \" --description \"Initial version of composite content view\" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"hammer content-view filter create --name \" Errata Filter \" --type erratum --content-view \" Example_Content_View \" --description \" My latest filter \" --inclusion false --organization \" My_Organization \"",
"hammer content-view filter rule create --content-view \" Example_Content_View \" --content-view-filter \" Errata Filter \" --start-date \" YYYY-MM-DD \" --types enhancement,bugfix --date-type updated --organization \" My_Organization \"",
"hammer content-view publish --name \" Example_Content_View \" --description \"Adding errata filter\" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \""
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/managing_content_views_content-management
|
Chapter 8. Event [v1]
|
Chapter 8. Event [v1] Description Event is a report of an event somewhere in the cluster. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object Required metadata involvedObject 8.1. Specification Property Type Description action string What action was taken/failed regarding to the Regarding object. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources count integer The number of times this event has occurred. eventTime MicroTime Time when this Event was first observed. firstTimestamp Time The time at which the event was first recorded. (Time of server receipt is in TypeMeta.) involvedObject object ObjectReference contains enough information to let you inspect or modify the referred object. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lastTimestamp Time The time at which the most recent occurrence of this event was recorded. message string A human-readable description of the status of this operation. metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata reason string This should be a short, machine understandable string that gives the reason for the transition into the object's current status. related object ObjectReference contains enough information to let you inspect or modify the referred object. reportingComponent string Name of the controller that emitted this Event, e.g. kubernetes.io/kubelet . reportingInstance string ID of the controller instance, e.g. kubelet-xyzf . series object EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. source object EventSource contains information for an event. type string Type of this event (Normal, Warning), new types could be added in the future 8.1.1. .involvedObject Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.2. .related Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.3. .series Description EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. Type object Property Type Description count integer Number of occurrences in this series up to the last heartbeat time lastObservedTime MicroTime Time of the last occurrence observed 8.1.4. .source Description EventSource contains information for an event. Type object Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated. 8.2. API endpoints The following API endpoints are available: /api/v1/events GET : list or watch objects of kind Event /api/v1/watch/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/events DELETE : delete collection of Event GET : list or watch objects of kind Event POST : create an Event /api/v1/watch/namespaces/{namespace}/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/events/{name} DELETE : delete an Event GET : read the specified Event PATCH : partially update the specified Event PUT : replace the specified Event /api/v1/watch/namespaces/{namespace}/events/{name} GET : watch changes to an object of kind Event. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 8.2.1. /api/v1/events Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Event Table 8.2. HTTP responses HTTP code Reponse body 200 - OK EventList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/events Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/events Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Event Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. 
zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Event Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK EventList schema 401 - Unauthorized Empty HTTP method POST Description create an Event Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Event schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 202 - Accepted Event schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/events Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/events/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the Event namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Event Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Event Table 8.23. HTTP responses HTTP code Reponse body 200 - OK Event schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Event Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Event Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Event schema Table 8.29. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty 8.2.6. /api/v1/watch/namespaces/{namespace}/events/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the Event namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Event. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
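As an illustration of the list endpoint described above, the following sketch queries the namespaced events collection with curl; the API server URL, bearer token, and namespace are assumptions that you must replace with values for your own cluster, and -k skips TLS verification for brevity only:

curl -k -H "Authorization: Bearer $TOKEN" "https://api.cluster.example.com:6443/api/v1/namespaces/default/events?limit=5"

The documented query parameters can be appended in the same way; for example, adding &fieldSelector=type%3DWarning restricts the returned EventList to Warning events, and adding &watch=true streams change notifications instead of returning a single list.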
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/metadata_apis/event-v1
|
7.218. tuna
|
7.218. tuna 7.218.1. RHBA-2015:1261 - tuna bug fix update An updated tuna package that fixes one bug is now available for Red Hat Enterprise Linux 6. The tuna package provides an interface for changing both scheduler and IRQ tunables at whole-CPU, per-thread, or per-IRQ levels. Tuna allows CPUs to be isolated for use by a specific application, and threads and interrupts to be moved to a CPU simply by dragging and dropping them. Bug Fix BZ# 914366 In Red Hat Enterprise Linux 6.5, the oscilloscope utility was generated successfully, but MRG Realtime was unable to install it. With this update, a specific version of tuna is no longer required, and oscilloscope is now installed as expected. Users of tuna are advised to upgrade to this updated package, which fixes this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-tuna
|
Chapter 1. Identity Management Overview
|
Chapter 1. Identity Management Overview The basic identity management concepts for securing applications with various identity stores are covered in the Red Hat JBoss Enterprise Application Platform (JBoss EAP) Security Architecture guide . This guide shows you how to configure various identity stores, such as a filesystem or LDAP, to secure applications. In some cases you can also use certain identity stores, such as LDAP, as an authorization authority. Various role and access information about principals can be stored in an LDAP directory which can then be used directly by JBoss EAP or mapped to existing JBoss EAP roles. Note Using identity stores backed by external datastores, such as databases or LDAP directories, can have a performance impact on authentication and authorization due to the data access and transport between the external datastore and the JBoss EAP instance.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_identity_management/identity_management_overview
|
Chapter 3. Configuring HTTPS
|
Chapter 3. Configuring HTTPS Abstract This chapter describes how to configure HTTPS endpoints. 3.1. Authentication Alternatives 3.1.1. Target-Only Authentication Overview When an application is configured for target-only authentication, the target authenticates itself to the client but the client does not authenticate itself to the target, as shown in Figure 3.1, "Target Authentication Only" . Figure 3.1. Target Authentication Only Security handshake Prior to running the application, the client and server should be set up as follows: A certificate chain is associated with the server. The certificate chain is provided in the form of a Java keystore (see Section 3.3, "Specifying an Application's Own Certificate" ). One or more lists of trusted certification authorities (CA) are made available to the client (see Section 3.2, "Specifying Trusted CA Certificates" ). During the security handshake, the server sends its certificate chain to the client (see Figure 3.1, "Target Authentication Only" ). The client then searches its trusted CA lists to find a CA certificate that matches one of the CA certificates in the server's certificate chain. HTTPS example On the client side, there are no policy settings required for target-only authentication. Simply configure your client without associating an X.509 certificate with the HTTPS port. You must provide the client with a list of trusted CA certificates, however (see Section 3.2, "Specifying Trusted CA Certificates" ). On the server side, in the server's XML configuration file, make sure that the sec:clientAuthentication element does not require client authentication. This element can be omitted, in which case the default policy is to not require client authentication. However, if the sec:clientAuthentication element is present, it should be configured as follows: Important You must set secureSocketProtocol to TLSv1 on the server side, in order to protect against the Poodle vulnerability (CVE-2014-3566). Where the want attribute is set to false (the default), specifying that the server does not request an X.509 certificate from the client during a TLS handshake. The required attribute is also set to false (the default), specifying that the absence of a client certificate does not trigger an exception during the TLS handshake. Note The want attribute can be set either to true or to false . If set to true , the want setting causes the server to request a client certificate during the TLS handshake, but no exception is raised for clients lacking a certificate, so long as the required attribute is set to false . It is also necessary to associate an X.509 certificate with the server's HTTPS port (see Section 3.3, "Specifying an Application's Own Certificate" ) and to provide the server with a list of trusted CA certificates (see Section 3.2, "Specifying Trusted CA Certificates" ). Note The choice of cipher suite can potentially affect whether or not target-only authentication is supported (see Chapter 4, Configuring HTTPS Cipher Suites ). 3.1.2. Mutual Authentication Overview When an application is configured for mutual authentication, the target authenticates itself to the client and the client authenticates itself to the target. This scenario is illustrated in Figure 3.2, "Mutual Authentication" . In this case, the server and the client each require an X.509 certificate for the security handshake. Figure 3.2. 
Mutual Authentication Security handshake Prior to running the application, the client and server must be set up as follows: Both client and server have an associated certificate chain (see Section 3.3, "Specifying an Application's Own Certificate" ). Both client and server are configured with lists of trusted certification authorities (CA) (see Section 3.2, "Specifying Trusted CA Certificates" ). During the TLS handshake, the server sends its certificate chain to the client, and the client sends its certificate chain to the server (see Figure 3.2, "Mutual Authentication" ). HTTPS example On the client side, there are no policy settings required for mutual authentication. Simply associate an X.509 certificate with the client's HTTPS port (see Section 3.3, "Specifying an Application's Own Certificate" ). You also need to provide the client with a list of trusted CA certificates (see Section 3.2, "Specifying Trusted CA Certificates" ). On the server side, in the server's XML configuration file, make sure that the sec:clientAuthentication element is configured to require client authentication. For example: Important You must set secureSocketProtocol to TLSv1 on the server side, in order to protect against the Poodle vulnerability (CVE-2014-3566). Where the want attribute is set to true , specifying that the server requests an X.509 certificate from the client during a TLS handshake. The required attribute is also set to true , specifying that the absence of a client certificate triggers an exception during the TLS handshake. It is also necessary to associate an X.509 certificate with the server's HTTPS port (see Section 3.3, "Specifying an Application's Own Certificate" ) and to provide the server with a list of trusted CA certificates (see Section 3.2, "Specifying Trusted CA Certificates" ). Note The choice of cipher suite can potentially affect whether or not mutual authentication is supported (see Chapter 4, Configuring HTTPS Cipher Suites ). 3.2. Specifying Trusted CA Certificates 3.2.1. When to Deploy Trusted CA Certificates Overview When an application receives an X.509 certificate during an SSL/TLS handshake, the application decides whether or not to trust the received certificate by checking whether the issuer CA is one of a pre-defined set of trusted CA certificates. If the received X.509 certificate is validly signed by one of the application's trusted CA certificates, the certificate is deemed trustworthy; otherwise, it is rejected. Which applications need to specify trusted CA certificates? Any application that is likely to receive an X.509 certificate as part of an HTTPS handshake must specify a list of trusted CA certificates. For example, this includes the following types of application: All HTTPS clients. Any HTTPS servers that support mutual authentication . 3.2.2. Specifying Trusted CA Certificates for HTTPS CA certificate format CA certificates must be provided in Java keystore format. CA certificate deployment in the Apache CXF configuration file To deploy one or more trusted root CAs for the HTTPS transport, perform the following steps: Assemble the collection of trusted CA certificates that you want to deploy. The trusted CA certificates can be obtained from public CAs or private CAs (for details of how to generate your own CA certificates, see Section 2.5, "Creating Your Own Certificates" ). The trusted CA certificates can be in any format that is compatible with the Java keystore utility; for example, PEM format.
All you need are the certificates themselves-the private keys and passwords are not required. Given a CA certificate, cacert.pem , in PEM format, you can add the certificate to a JKS truststore (or create a new truststore) by entering the following command: Where CAAlias is a convenient tag that enables you to access this particular CA certificate using the keytool utility. The file, truststore.jks , is a keystore file containing CA certificates-if this file does not already exist, the keytool utility creates one. The StorePass password provides access to the keystore file, truststore.jks . Repeat step 2 as necessary to add all of the CA certificates to the truststore file, truststore.jks . Edit the relevant XML configuration files to specify the location of the truststore file. You must include the sec:trustManagers element in the configuration of the relevant HTTPS ports. For example, you can configure a client port as follows: Where the type attribute specifies that the truststore uses the JKS keystore implementation and StorePass is the password needed to access the truststore.jks keystore. Configure a server port as follows: Important You must set secureSocketProtocol to TLSv1 on the server side, in order to protect against the Poodle vulnerability (CVE-2014-3566). Warning The directory containing the truststores (for example, X509Deploy /truststores/ ) should be a secure directory (that is, writable only by the administrator). 3.3. Specifying an Application's Own Certificate 3.3.1. Deploying Own Certificate for HTTPS Overview When working with the HTTPS transport, the application's certificate is deployed using the XML configuration file. Procedure To deploy an application's own certificate for the HTTPS transport, perform the following steps: Obtain an application certificate in Java keystore format, CertName .jks . For instructions on how to create a certificate in Java keystore format, see Section 2.5.3, "Use the CA to Create Signed Certificates in a Java Keystore" . Note Some HTTPS clients (for example, Web browsers) perform a URL integrity check , which requires a certificate's identity to match the hostname on which the server is deployed. See Section 2.4, "Special Requirements on HTTPS Certificates" for details. Copy the certificate's keystore, CertName .jks , to the certificates directory on the deployment host; for example, X509Deploy /certs . The certificates directory should be a secure directory that is writable only by administrators and other privileged users. Edit the relevant XML configuration file to specify the location of the certificate keystore, CertName .jks . You must include the sec:keyManagers element in the configuration of the relevant HTTPS ports. For example, you can configure a client port as follows: Where the keyPassword attribute specifies the password needed to decrypt the certificate's private key (that is, CertPassword ), the type attribute specifies that the keystore uses the JKS keystore implementation, and the password attribute specifies the password required to access the CertName .jks keystore (that is, KeystorePassword ). Configure a server port as follows: Important You must set secureSocketProtocol to TLSv1 on the server side, in order to protect against the Poodle vulnerability (CVE-2014-3566). Warning The directory containing the application certificates (for example, X509Deploy /certs/ ) should be a secure directory (that is, readable and writable only by the administrator).
Warning The directory containing the XML configuration file should be a secure directory (that is, readable and writable only by the administrator), because the configuration file contains passwords in plain text.
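As a supplement to Section 3.3, a self-signed certificate can be generated directly into a Java keystore with the JDK keytool utility for quick testing of the sec:keyManagers configuration. This is a minimal sketch only; the alias, distinguished name, validity period, and file names below are hypothetical placeholders rather than values required by this guide:
keytool -genkeypair -alias serverCert -keyalg RSA -keysize 2048 -validity 365 -dname "CN=www.example.com,OU=Engineering,O=Example Corp" -keystore CertName.jks -storepass KeystorePassword -keypass CertPassword
The resulting CertName.jks file can then be referenced from the file attribute of the sec:keyStore element shown in the examples. A self-signed certificate is suitable only for testing; for production deployments, create CA-signed certificates as described in Section 2.5.3, "Use the CA to Create Signed Certificates in a Java Keystore", and keep in mind the URL integrity check described in Section 2.4.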
|
[
"<http:destination id=\"{ Namespace } PortName .http-destination\"> <http:tlsServerParameters secureSocketProtocol=\"TLSv1\"> <sec:clientAuthentication want=\"false\" required=\"false\"/> </http:tlsServerParameters> </http:destination>",
"<http:destination id=\"{ Namespace } PortName .http-destination\"> <http:tlsServerParameters secureSocketProtocol=\"TLSv1\"> <sec:clientAuthentication want=\"true\" required=\"true\"/> </http:tlsServerParameters> </http:destination>",
"keytool -import -file cacert.pem -alias CAAlias -keystore truststore.jks -storepass StorePass",
"<!-- Client port configuration --> <http:conduit id=\"{ Namespace } PortName .http-conduit\"> <http:tlsClientParameters> <sec:trustManagers> <sec:keyStore type=\"JKS\" password=\" StorePass \" file=\"certs/truststore.jks\"/> </sec:trustManagers> </http:tlsClientParameters> </http:conduit>",
"<!-- Server port configuration --> <http:destination id=\"{ Namespace } PortName .http-destination\"> <http:tlsServerParameters secureSocketProtocol=\"TLSv1\"> <sec:trustManagers> <sec:keyStore type=\"JKS\" password=\" StorePass \" file=\"certs/truststore.jks\"/> </sec:trustManagers> </http:tlsServerParameters> </http:destination>",
"<http:conduit id=\"{ Namespace } PortName .http-conduit\"> <http:tlsClientParameters> <sec:keyManagers keyPassword=\" CertPassword \"> <sec:keyStore type=\"JKS\" password=\" KeystorePassword \" file=\"certs/ CertName .jks\"/> </sec:keyManagers> </http:tlsClientParameters> </http:conduit>",
"<http:destination id=\"{ Namespace } PortName .http-destination\"> <http:tlsServerParameters secureSocketProtocol=\"TLSv1\"> <sec:keyManagers keyPassword=\" CertPassword \"> <sec:keyStore type=\"JKS\" password=\" KeystorePassword \" file=\"certs/ CertName .jks\"/> </sec:keyManagers> </http:tlsServerParameters> </http:destination>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/ConfigTLS
|
Chapter 2. Setting up automation mesh
|
Chapter 2. Setting up automation mesh Configure the Ansible Automation Platform installer to set up automation mesh for your Ansible environment. Perform additional tasks to customize your installation, such as importing a Certificate Authority (CA) certificate. 2.1. automation mesh Installation You use the Ansible Automation Platform installation program to set up automation mesh or to upgrade to automation mesh. To provide Ansible Automation Platform with details about the nodes, groups, and peer relationships in your mesh network, you define them in the inventory file in the installer bundle. Additional Resources Red Hat Ansible Automation Platform Installation Guide Automation Mesh Design Patterns 2.2. Importing a Certificate Authority (CA) certificate A Certificate Authority (CA) verifies and signs individual node certificates in an automation mesh environment. You can provide your own CA by specifying the path to the certificate and the private RSA key file in the inventory file of your Red Hat Ansible Automation Platform installer. Note The Ansible Automation Platform installation program generates a CA if you do not provide one. Procedure Open the inventory file for editing. Add the mesh_ca_keyfile variable and specify the full path to the private RSA key ( .key ). Add the mesh_ca_certfile variable and specify the full path to the CA certificate file ( .crt ). Save the changes to the inventory file. Example With the CA files added to the inventory file, run the installation program to apply the CA. This process copies the CA to the /etc/receptor/tls/ca/ directory on each control and execution node on your mesh network. Additional resources Red Hat Ansible Automation Platform System Requirements
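As a supplement to Section 2.2, one way to produce the key and certificate files referenced by mesh_ca_keyfile and mesh_ca_certfile is with OpenSSL. This is a minimal sketch under the assumption that a self-signed CA is acceptable in your environment; the file paths, subject name, and validity period shown are hypothetical examples:
openssl genrsa -out /tmp/mesh_CA.key 4096
openssl req -x509 -new -nodes -key /tmp/mesh_CA.key -sha256 -days 3650 -subj "/CN=Automation Mesh CA" -out /tmp/mesh_CA.crt
Any CA certificate and private RSA key accepted by your organization can be used instead; the installer only requires that the paths set in the inventory file point to valid files.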
|
[
"[all:vars] mesh_ca_keyfile=/tmp/ <mesh_CA> .key mesh_ca_certfile=/tmp/ <mesh_CA> .crt"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/setting-up
|
2.3. Configuring ACPI For Use with Integrated Fence Devices
|
2.3. Configuring ACPI For Use with Integrated Fence Devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. Note For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to http://www.redhat.com/cluster_suite/hardware/ . If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management; however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off: Section 2.3.1, "Disabling ACPI Soft-Off with chkconfig Management" - Preferred method Section 2.3.2, "Disabling ACPI Soft-Off with the BIOS" - First alternate method Section 2.3.3, "Disabling ACPI Completely in the grub.conf File" - Second alternate method 2.3.1. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon ( acpid ) from chkconfig management or by turning off acpid . Note This is the preferred method of disabling ACPI Soft-Off. Disable ACPI Soft-Off with chkconfig management at each cluster node as follows: Run either of the following commands: chkconfig --del acpid - This command removes acpid from chkconfig management. - OR - chkconfig --level 2345 acpid off - This command turns off acpid . Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced.
Note You can fence the node with the fence_node command or Conga .
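As a quick reference, the chkconfig-based procedure described above amounts to the following commands, run as root on each cluster node; the node name passed to fence_node is a hypothetical example, not a value taken from this guide:
# Remove acpid from chkconfig management, or turn it off for runlevels 2-5:
chkconfig --del acpid
chkconfig --level 2345 acpid off
# Reboot the node so the change takes effect:
reboot
# With the cluster configured and running, verify that fencing powers the node off immediately:
fence_node node1.example.com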
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-acpi-ca
|
Chapter 18. KubeControllerManager [operator.openshift.io/v1]
|
Chapter 18. KubeControllerManager [operator.openshift.io/v1] Description KubeControllerManager provides information to configure an operator to manage kube-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes Controller Manager status object status is the most recently observed status of the Kubernetes Controller Manager 18.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes Controller Manager Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 
useMoreSecureServiceCA boolean useMoreSecureServiceCA indicates that the service-ca.crt provided in SA token volumes should include only enough certificates to validate service serving certificates. Once set to true, it cannot be set to false. Even if someone finds a way to set it back to false, the service-ca.crt files that previously existed will only have the more secure content. 18.1.2. .status Description status is the most recently observed status of the Kubernetes Controller Manager Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 18.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 18.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 18.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 18.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 18.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 18.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. 
lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 18.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubecontrollermanagers DELETE : delete collection of KubeControllerManager GET : list objects of kind KubeControllerManager POST : create a KubeControllerManager /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} DELETE : delete a KubeControllerManager GET : read the specified KubeControllerManager PATCH : partially update the specified KubeControllerManager PUT : replace the specified KubeControllerManager /apis/operator.openshift.io/v1/kubecontrollermanagers/{name}/status GET : read status of the specified KubeControllerManager PATCH : partially update status of the specified KubeControllerManager PUT : replace status of the specified KubeControllerManager 18.2.1. /apis/operator.openshift.io/v1/kubecontrollermanagers Table 18.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeControllerManager Table 18.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. Table 18.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeControllerManager Table 18.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.5. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeControllerManager Table 18.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.7. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.8. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 202 - Accepted KubeControllerManager schema 401 - Unauthorized Empty 18.2.2. /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} Table 18.9. Global path parameters Parameter Type Description name string name of the KubeControllerManager Table 18.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeControllerManager Table 18.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 18.12. Body parameters Parameter Type Description body DeleteOptions schema Table 18.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeControllerManager Table 18.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.15. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeControllerManager Table 18.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.17. Body parameters Parameter Type Description body Patch schema Table 18.18. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeControllerManager Table 18.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.20. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.21. 
HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 401 - Unauthorized Empty 18.2.3. /apis/operator.openshift.io/v1/kubecontrollermanagers/{name}/status Table 18.22. Global path parameters Parameter Type Description name string name of the KubeControllerManager Table 18.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeControllerManager Table 18.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.25. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeControllerManager Table 18.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.27. Body parameters Parameter Type Description body Patch schema Table 18.28. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeControllerManager Table 18.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.30. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.31. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 401 - Unauthorized Empty
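As a brief illustration of how the spec fields and endpoints documented above are typically used from the command line, the following oc commands show one way to read and update the resource. This is a hedged sketch: it assumes the conventional singleton resource named cluster, and the logLevel value shown is only an example:
oc get kubecontrollermanager cluster -o yaml
oc patch kubecontrollermanager cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'
The patch corresponds to the PATCH operation on the /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} endpoint listed in Section 18.2.2.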
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/kubecontrollermanager-operator-openshift-io-v1
|
Chapter 21. Red Hat Software Collections
|
Chapter 21. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Since Red Hat Software Collections 2.3, the Eclipse development platform is provided as a separate Software Collection. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.
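As a brief illustration of the scl mechanism described above, a Software Collection can be enabled for a single command or for an interactive shell; the collection names used here are illustrative and depend on which collections are installed on your system:
scl enable rh-python36 'python --version'
scl enable devtoolset-7 bash
The default system tools remain unchanged; the collection's versions are placed on the PATH only for the command or shell started by scl enable.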
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/chap-red_hat_enterprise_linux-7.3_release_notes-red_hat_software_collections
|
Chapter 10. Migrating your applications
|
Chapter 10. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or from the command line . You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. During migration, MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 10.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.18 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. Additional resources for migration prerequisites Manually exposing a secure registry for OpenShift Container Platform 3 Updating deprecated internal images 10.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 10.2.1. 
Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 10.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. 
To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 10.2.3. Adding a replication repository to the MTC web console You can add object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Platform (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. 
GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 10.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click Next . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click Next . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click Next . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click Next . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. 
This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click Next . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources MTC file system copy method MTC snapshot copy method 10.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu next to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
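If you want to restore the original reclaim policy of a migrated PV after verifying the migration, you can patch the PV manually. The following command is a minimal sketch; the PV name is a placeholder, and the policy value shown assumes that the original policy recorded in the PVOriginalReclaimPolicy annotation was Delete:

oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

Check the annotation on the Backup custom resource first to confirm which policy to restore for each volume.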
|
[
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc create token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"az group list",
"{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migrating_from_version_3_to_4/migrating-applications-3-4
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_gfs2_file_systems/proc_providing-feedback-on-red-hat-documentation_configuring-gfs2-file-systems
|
Chapter 8. Using Streams for Apache Kafka with Kafka Connect
|
Chapter 8. Using Streams for Apache Kafka with Kafka Connect Use Kafka Connect to stream data between Kafka and external systems. Kafka Connect provides a framework for moving large amounts of data while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with database, storage, and messaging systems that are external to your Kafka cluster. Kafka Connect runs in standalone or distributed modes. Standalone mode In standalone mode, Kafka Connect runs on a single node. Standalone mode is intended for development and testing. Distributed mode In distributed mode, Kafka Connect runs across one or more worker nodes and the workloads are distributed among them. Distributed mode is intended for production. Kafka Connect uses connector plugins that implement connectivity for different types of external systems. There are two types of connector plugins: sink and source. Sink connectors stream data from Kafka to external systems. Source connectors stream data from external systems into Kafka. You can also use the Kafka Connect REST API to create, manage, and monitor connector instances. Connector configuration specifies details such as the source or sink connectors and the Kafka topics to read from or write to. How you manage the configuration depends on whether you are running Kafka Connect in standalone or distributed mode. In standalone mode, you can provide the connector configuration as JSON through the Kafka Connect REST API or you can use properties files to define the configuration. In distributed mode, you can only provide the connector configuration as JSON through the Kafka Connect REST API. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages . 8.1. Using Kafka Connect in standalone mode In Kafka Connect standalone mode, connectors run on the same node as the Kafka Connect worker process, which runs as a single process in a single JVM. This means that the worker process and connectors share the same resources, such as CPU, memory, and disk. 8.1.1. Configuring Kafka Connect in standalone mode To configure Kafka Connect in standalone mode, edit the config/connect-standalone.properties configuration file. The following options are the most important. bootstrap.servers A list of Kafka broker addresses used as bootstrap connections to Kafka. For example, kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 . key.converter The class used to convert message keys to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . offset.storage.file.filename Specifies the file in which the offset data is stored. Connector plugins open client connections to the Kafka brokers using the bootstrap address. To configure these connections, use the standard Kafka producer and consumer configuration options prefixed by producer. or consumer. . 8.1.2. Running Kafka Connect in standalone mode Configure and run Kafka Connect in standalone mode. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. You have specified connector configuration in properties files. You can also use the Kafka Connect REST API to manage connectors . 
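For reference, a connector configuration properties file such as connector1.properties in the procedure that follows might look like the following sketch. It uses the FileStreamSource example connector that ships with Streams for Apache Kafka; the file path and topic name are illustrative assumptions:

name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/source-data.txt
topic=my-topic

This connector reads lines from the specified file and produces them to the named Kafka topic.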
Procedure Edit the /opt/kafka/config/connect-standalone.properties Kafka Connect configuration file and set bootstrap.servers to point to your Kafka brokers. For example: bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 Start Kafka Connect with the configuration file and specify one or more connector configurations. su - kafka /opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties connector1.properties [connector2.properties ...] Verify that Kafka Connect is running. jcmd | grep ConnectStandalone 8.2. Using Kafka Connect in distributed mode In distributed mode, Kafka Connect runs as a cluster of worker processes, with each worker running on a separate node. Connectors can run on any worker in the cluster, allowing for greater scalability and fault tolerance. The connectors are managed by the workers, which coordinate with each other to distribute the work and ensure that each connector is running on a single node at any given time. 8.2.1. Configuring Kafka Connect in distributed mode To configure Kafka Connect in distributed mode, edit the config/connect-distributed.properties configuration file. The following options are the most important. bootstrap.servers A list of Kafka broker addresses used as bootstrap connections to Kafka. For example, kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 . key.converter The class used to convert message keys to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. For example, org.apache.kafka.connect.json.JsonConverter . group.id The name of the distributed Kafka Connect cluster. This must be unique and must not conflict with another consumer group ID. The default value is connect-cluster . config.storage.topic The Kafka topic used to store connector configurations. The default value is connect-configs . offset.storage.topic The Kafka topic used to store offsets. The default value is connect-offset . status.storage.topic The Kafka topic used for worker node statuses. The default value is connect-status . Streams for Apache Kafka includes an example configuration file for Kafka Connect in distributed mode - see config/connect-distributed.properties in the Streams for Apache Kafka installation directory. Connector plugins open client connections to the Kafka brokers using the bootstrap address. To configure these connections, use the standard Kafka producer and consumer configuration options prefixed by producer. or consumer. . 8.2.2. Running Kafka Connect in distributed mode Configure and run Kafka Connect in distributed mode. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Running the cluster Edit the /opt/kafka/config/connect-distributed.properties Kafka Connect configuration file on all Kafka Connect worker nodes. Set the bootstrap.servers option to point to your Kafka brokers. Set the group.id option. Set the config.storage.topic option. Set the offset.storage.topic option. Set the status.storage.topic option. 
For example: bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 group.id=my-group-id config.storage.topic=my-group-id-configs offset.storage.topic=my-group-id-offsets status.storage.topic=my-group-id-status Start the Kafka Connect workers with the /opt/kafka/config/connect-distributed.properties configuration file on all Kafka Connect nodes. su - kafka /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties Verify that Kafka Connect is running. jcmd | grep ConnectDistributed Use the Kafka Connect REST API to manage connectors . 8.3. Managing connectors The Kafka Connect REST API provides endpoints for creating, updating, and deleting connectors directly. You can also use the API to check the status of connectors or change logging levels. When you create a connector through the API, you provide the configuration details for the connector as part of the API call. You can also add and manage connectors as plugins. Plugins are packaged as JAR files that contain the classes to implement the connectors through the Kafka Connect API. You just need to specify the plugin in the classpath or add it to a plugin path for Kafka Connect to run the connector plugin on startup. In addition to using the Kafka Connect REST API or plugins to manage connectors, you can also add connector configuration using properties files when running Kafka Connect in standalone mode. To do this, you simply specify the location of the properties file when starting the Kafka Connect worker process. The properties file should contain the configuration details for the connector, including the connector class, source and destination topics, and any required authentication or serialization settings. 8.3.1. Limiting access to the Kafka Connect API The Kafka Connect REST API can be accessed by anyone who has authenticated access and knows the endpoint URL, which includes the hostname/IP address and port number. It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. For improved security, we recommend configuring the following properties for the Kafka Connect API: (Kafka 3.4 or later) org.apache.kafka.disallowed.login.modules to specifically exclude insecure login modules connector.client.config.override.policy set to NONE to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses 8.3.2. Configuring connectors Use the Kafka Connect REST API or properties files to create, manage, and monitor connector instances. You can use the REST API when using Kafka Connect in standalone or distributed mode. You can use properties files when using Kafka Connect in standalone mode. 8.3.2.1. Using the Kafka Connect REST API to manage connectors When using the Kafka Connect REST API, you can create connectors dynamically by sending PUT or POST HTTP requests to the Kafka Connect REST API, specifying the connector configuration details in the request body. Tip When you use the PUT command, it's the same command for starting and updating connectors. The REST interface listens on port 8083 by default and supports the following endpoints: GET /connectors Return a list of existing connectors. POST /connectors Create a connector. The request body has to be a JSON object with the connector configuration. GET /connectors/<connector_name> Get information about a specific connector. 
GET /connectors/<connector_name>/config Get configuration of a specific connector. PUT /connectors/<connector_name>/config Update the configuration of a specific connector. GET /connectors/<connector_name>/status Get the status of a specific connector. GET /connectors/<connector_name>/tasks Get a list of tasks for a specific connector GET /connectors/<connector_name>/tasks/ <task_id> /status Get the status of a task for a specific connector PUT /connectors/<connector_name>/pause Pause the connector and all its tasks. The connector will stop processing any messages. PUT /connectors/<connector_name>/stop Stop the connector and all its tasks. The connector will stop processing any messages. Stopping a connector from running may be more suitable for longer durations than just pausing. PUT /connectors/<connector_name>/resume Resume a paused connector. POST /connectors/<connector_name>/restart Restart a connector in case it has failed. POST /connectors/<connector_name>/tasks/ <task_id> /restart Restart a specific task. DELETE /connectors/<connector_name> Delete a connector. GET /connectors/<connector_name>/topics Get the topics for a specific connector. PUT /connectors/<connector_name>/topics/reset Empty the set of active topics for a specific connector. GET /connectors/<connector_name>/offsets Get the current offsets for a connector. DELETE /connectors/<connector_name>/offsets Reset the offsets for a connector, which must be in a stopped state. PATCH /connectors/<connector_name>/offsets Adjust the offsets (using an offset property in the request) for a connector, which must be in a stopped state. GET /connector-plugins Get a list of all supported connector plugins. GET /connector-plugins/<connector_plugin_type>/config Get the configuration for a connector plugin. PUT /connector-plugins/<connector_type>/config/validate Validate connector configuration. 8.3.2.2. Specifying connector configuration properties To configure a Kafka Connect connector, you need to specify the configuration details for source or sink connectors. There are two ways to do this: through the Kafka Connect REST API, using JSON to provide the configuration, or by using properties files to define the configuration properties. The specific configuration options available for each type of connector may differ, but both methods provide a flexible way to specify the necessary settings. The following options apply to all connectors: name The name of the connector, which must be unique within the current Kafka Connect instance. connector.class The class of the connector plug-in. For example, org.apache.kafka.connect.file.FileStreamSinkConnector . tasks.max The maximum number of tasks that the specified connector can use. Tasks enable the connector to perform work in parallel. The connector might create fewer tasks than specified. key.converter The class used to convert message keys to and from Kafka format. This overrides the default value set by the Kafka Connect configuration. For example, org.apache.kafka.connect.json.JsonConverter . value.converter The class used to convert message payloads to and from Kafka format. This overrides the default value set by the Kafka Connect configuration. For example, org.apache.kafka.connect.json.JsonConverter . You must set at least one of the following options for sink connectors: topics A comma-separated list of topics used as input. topics.regex A Java regular expression of topics used as input. For all other options, see the connector properties in the Apache Kafka documentation . 
Note Streams for Apache Kafka includes the example connector configuration files config/connect-file-sink.properties and config/connect-file-source.properties in the Streams for Apache Kafka installation directory. Additional resources Kafka Connect REST API OpenAPI documentation 8.3.3. Creating connectors using the Kafka Connect API Use the Kafka Connect REST API to create a connector to use with Kafka Connect. Prerequisites A Kafka Connect installation. Procedure Prepare a JSON payload with the connector configuration. For example: { "name": "my-connector", "config": { "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max": "1", "topics": "my-topic-1,my-topic-2", "file": "/tmp/output-file.txt" } } Send a POST request to <KafkaConnectAddress> :8083/connectors to create the connector. The following example uses curl : curl -X POST -H "Content-Type: application/json" --data @sink-connector.json http://connect0.my-domain.com:8083/connectors Verify that the connector was deployed by sending a GET request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors 8.3.4. Deleting connectors using the Kafka Connect API Use the Kafka Connect REST API to delete a connector from Kafka Connect. Prerequisites A Kafka Connect installation. Deleting connectors Verify that the connector exists by sending a GET request to <KafkaConnectAddress> :8083/connectors/ <ConnectorName> . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors To delete the connector, send a DELETE request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl -X DELETE http://connect0.my-domain.com:8083/connectors/my-connector Verify that the connector was deleted by sending a GET request to <KafkaConnectAddress> :8083/connectors . The following example uses curl : curl http://connect0.my-domain.com:8083/connectors 8.3.5. Adding connector plugins Kafka provides example connectors to use as a starting point for developing connectors. The following example connectors are included with Streams for Apache Kafka: FileStreamSink Reads data from Kafka topics and writes the data to a file. FileStreamSource Reads data from a file and sends the data to Kafka topics. Both connectors are contained in the libs/connect-file-<kafka_version>.redhat-<build>.jar plugin. To use the connector plugins in Kafka Connect, you can add them to the classpath or specify a plugin path in the Kafka Connect properties file and copy the plugins to the location. Specifying the example connectors in the classpath CLASSPATH=/opt/kafka/libs/connect-file-<kafka_version>.redhat-<build>.jar opt/kafka/bin/connect-distributed.sh Setting a plugin path plugin.path=/opt/kafka/connector-plugins,/opt/connectors The plugin.path configuration option can contain a comma-separated list of paths. You can add more connector plugins if needed. Kafka Connect searches for and runs connector plugins at startup. Note When running Kafka Connect in distributed mode, plugins must be made available on all worker nodes.
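To confirm that Kafka Connect has loaded the plugins, you can query the connector plugins endpoint of the REST API described earlier. The following request is a sketch that reuses the example worker address from this chapter:

curl http://connect0.my-domain.com:8083/connector-plugins

The response lists the available connector classes, which should include the FileStreamSink and FileStreamSource example connectors if their JAR file is on the classpath or plugin path.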
|
[
"bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092",
"su - kafka /opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties connector1.properties [connector2.properties ...]",
"jcmd | grep ConnectStandalone",
"bootstrap.servers=kafka0.my-domain.com:9092,kafka1.my-domain.com:9092,kafka2.my-domain.com:9092 group.id=my-group-id config.storage.topic=my-group-id-configs offset.storage.topic=my-group-id-offsets status.storage.topic=my-group-id-status",
"su - kafka /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties",
"jcmd | grep ConnectDistributed",
"{ \"name\": \"my-connector\", \"config\": { \"connector.class\": \"org.apache.kafka.connect.file.FileStreamSinkConnector\", \"tasks.max\": \"1\", \"topics\": \"my-topic-1,my-topic-2\", \"file\": \"/tmp/output-file.txt\" } }",
"curl -X POST -H \"Content-Type: application/json\" --data @sink-connector.json http://connect0.my-domain.com:8083/connectors",
"curl http://connect0.my-domain.com:8083/connectors",
"curl http://connect0.my-domain.com:8083/connectors",
"curl -X DELETE http://connect0.my-domain.com:8083/connectors/my-connector",
"curl http://connect0.my-domain.com:8083/connectors",
"CLASSPATH=/opt/kafka/libs/connect-file-<kafka_version>.redhat-<build>.jar opt/kafka/bin/connect-distributed.sh",
"plugin.path=/opt/kafka/connector-plugins,/opt/connectors"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-kafka-connect-str
|
Chapter 18. Counting events during process execution with perf stat
|
Chapter 18. Counting events during process execution with perf stat You can use the perf stat command to count hardware and software events during process execution. Prerequisites You have the perf user space tool installed as described in Installing perf . 18.1. The purpose of perf stat The perf stat command executes a specified command, keeps a running count of hardware and software event occurrences during the command's execution, and generates statistics of these counts. If you do not specify any events, then perf stat counts a set of common hardware and software events. 18.2. Counting events with perf stat You can use perf stat to count hardware and software event occurrences during command execution and generate statistics of these counts. By default, perf stat operates in per-thread mode. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Count the events. Running the perf stat command without root access will only count events occurring in the user space: Example 18.1. Output of perf stat run without root access As you can see in the example, when perf stat runs without root access, the event names are followed by :u , indicating that these events were counted only in the user-space. To count both user-space and kernel-space events, you must have root access when running perf stat : Example 18.2. Output of perf stat run with root access By default, perf stat operates in per-thread mode. To change to CPU-wide event counting, pass the -a option to perf stat . To count CPU-wide events, you need root access: Additional resources perf-stat(1) man page on your system 18.3. Interpretation of perf stat output perf stat executes a specified command, counts event occurrences during the command's execution, and displays statistics of these counts in three columns: The number of occurrences counted for a given event The name of the event that was counted When related metrics are available, a ratio or percentage is displayed after the hash sign ( # ) in the right-most column. For example, when running in default mode, perf stat counts both cycles and instructions and, therefore, calculates and displays instructions per cycle in the right-most column. You can see similar behavior with regard to branch-misses as a percent of all branches since both events are counted by default. 18.4. Attaching perf stat to a running process You can attach perf stat to a running process. This will instruct perf stat to count event occurrences only in the specified processes during the execution of a command. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Attach perf stat to a running process: The example counts events in the processes with the IDs of ID1 and ID2 for a time period of seconds seconds as dictated by using the sleep command. Additional resources perf-stat(1) man page on your system
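To count specific events instead of the default set, pass them to perf stat with the -e option. The following command is a minimal sketch using common hardware event names; the events that are actually available depend on your CPU and can be listed with perf list:

perf stat -e cycles,instructions,cache-references,cache-misses ls

You can combine -e with the -a option for CPU-wide counting, or with the -p option to attach to running processes, as described in the sections above.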
|
[
"perf stat ls",
"Desktop Documents Downloads Music Pictures Public Templates Videos Performance counter stats for 'ls': 1.28 msec task-clock:u # 0.165 CPUs utilized 0 context-switches:u # 0.000 M/sec 0 cpu-migrations:u # 0.000 K/sec 104 page-faults:u # 0.081 M/sec 1,054,302 cycles:u # 0.823 GHz 1,136,989 instructions:u # 1.08 insn per cycle 228,531 branches:u # 178.447 M/sec 11,331 branch-misses:u # 4.96% of all branches 0.007754312 seconds time elapsed 0.000000000 seconds user 0.007717000 seconds sys",
"perf stat ls",
"Desktop Documents Downloads Music Pictures Public Templates Videos Performance counter stats for 'ls': 3.09 msec task-clock # 0.119 CPUs utilized 18 context-switches # 0.006 M/sec 3 cpu-migrations # 0.969 K/sec 108 page-faults # 0.035 M/sec 6,576,004 cycles # 2.125 GHz 5,694,223 instructions # 0.87 insn per cycle 1,092,372 branches # 352.960 M/sec 31,515 branch-misses # 2.89% of all branches 0.026020043 seconds time elapsed 0.000000000 seconds user 0.014061000 seconds sys",
"perf stat -a ls",
"perf stat -p ID1,ID2 sleep seconds"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/counting-events-during-process-execution-with-perf-stat_monitoring-and-managing-system-status-and-performance
|
Chapter 1. OpenShift Container Platform CLI tools overview
|
Chapter 1. OpenShift Container Platform CLI tools overview A user performs a range of operations while working on OpenShift Container Platform such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs OpenShift Container Platform offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in OpenShift Container Platform: OpenShift CLI (oc) : This is the most commonly used CLI tool by OpenShift Container Platform users. It helps both cluster administrators and developers to perform end-to-end operations across OpenShift Container Platform using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Developer CLI (odo) : The odo CLI tool helps developers focus on their main goal of creating and maintaining applications on OpenShift Container Platform by abstracting away complex Kubernetes and OpenShift Container Platform concepts. It helps the developers to write, build, and debug applications on a cluster from the terminal without the need to administer the cluster. Helm CLI : Helm is a package manager for Kubernetes applications which enables defining, installing, and upgrading applications packaged as Helm charts. Helm CLI helps the user deploy applications and services to OpenShift Container Platform clusters using simple commands from the terminal. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in OpenShift Container Platform, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge.
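For example, a typical OpenShift CLI ( oc ) session starts by logging in to a cluster and inspecting its resources. The following commands are a sketch; the API server URL and project name are placeholders:

oc login https://api.example-cluster.com:6443
oc project my-project
oc get pods

The other CLI tools follow the same pattern of short, task-focused commands that you run from the terminal.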
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cli_tools/cli-tools-overview
|