Chapter 2. Administering hosts
Chapter 2. Administering hosts

This chapter describes creating, registering, administering, and removing hosts.

2.1. Creating a host in Red Hat Satellite

Use this procedure to create a host in Red Hat Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure.

Procedure

1. In the Satellite web UI, navigate to Hosts > Create Host.
2. On the Host tab, enter the required details.
3. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove.
4. On the Puppet Classes tab, select the Puppet classes you want to include.
5. On the Interfaces tab, for each interface, click Edit in the Actions column and configure the following settings as required:
   - Type - For a Bond or BMC interface, use the Type list and select the interface type.
   - MAC address - Enter the MAC address.
   - DNS name - Enter the DNS name that is known to the DNS server. This is used for the host part of the FQDN.
   - Domain - Select the domain name of the provisioning network. This automatically updates the Subnet list with a selection of suitable subnets.
   - IPv4 Subnet - Select an IPv4 subnet for the host from the list.
   - IPv6 Subnet - Select an IPv6 subnet for the host from the list.
   - IPv4 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. The address can be omitted if provisioning tokens are enabled, if the domain does not manage DNS, if the subnet does not manage reverse DNS, or if the subnet does not manage DHCP reservations.
   - IPv6 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address.
   - Managed - Select this checkbox to configure the interface during provisioning to use the Capsule-provided DHCP and DNS services.
   - Primary - Select this checkbox to use the DNS name from this interface as the host portion of the FQDN.
   - Provision - Select this checkbox to use this interface for provisioning. This means TFTP boot will take place using this interface, or, in the case of image-based provisioning, the script to complete the provisioning will be executed through this interface. Note that many provisioning tasks, such as downloading packages by Anaconda or Puppet setup in a %post script, will use the primary interface.
   - Virtual NIC - Select this checkbox if this interface is not a physical device. This setting has two options: Tag - Optionally set a VLAN tag; if unset, the tag will be the VLAN ID of the subnet. Attached to - Enter the device name of the interface this virtual interface is attached to.
   Click OK to save the interface configuration. Optionally, click Add Interface to include an additional network interface. For more information, see Chapter 5, Adding network interfaces. Click Submit to apply the changes and exit.
6. On the Operating System tab, enter the required details. For Red Hat operating systems, select Synced Content for Media Selection. If you want to use non-Red Hat operating systems, select All Media, then select the installation media from the Media Selection list. You can select a partition table from the list or enter a custom partition table in the Custom partition table field. You cannot specify both.
7. On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Puppet Class, Ansible playbook parameters, and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter.
   When you create a Red Hat Enterprise Linux 8 host, you can set system purpose attributes. System purpose attributes define what subscriptions to attach automatically on host creation. In the Host Parameters area, enter the following parameter names with the corresponding values. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8.
   - syspurpose_role
   - syspurpose_sla
   - syspurpose_usage
   - syspurpose_addons
   If you want to create a host with pull mode for remote job execution, add the enable-remote-execution-pull parameter with type boolean set to true. For more information, see Section 12.4, "Transport modes for remote execution". A CLI sketch for setting these host parameters follows this procedure.
8. On the Additional Information tab, enter additional information about the host.
9. Click Submit to complete your provisioning request.

CLI procedure

To create a host associated with a host group, enter the hammer host create command listed at the end of this chapter. This command prompts you to specify the root password. It is required to specify the host's IP and MAC address. Other properties of the primary network interface can be inherited from the host group or set using the --subnet and --domain parameters. You can set additional interfaces using the --interface option, which accepts a list of key-value pairs. For the list of available interface settings, enter the hammer host create --help command.
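If you prefer to set the host parameters described in the procedure above from the command line, the following is a minimal sketch using hammer host set-parameter. The host name and values are placeholders, and it assumes your hammer version supports the --parameter-type option for typed parameters.

# Sketch: set system purpose host parameters on an existing host.
# "my-host.example.com" and the values are placeholders for your environment.
hammer host set-parameter --host "my-host.example.com" --name "syspurpose_role" --value "Red Hat Enterprise Linux Server"
hammer host set-parameter --host "my-host.example.com" --name "syspurpose_usage" --value "Production"
hammer host set-parameter --host "my-host.example.com" --name "syspurpose_sla" --value "Premium"

# Enable pull mode for remote execution as a boolean parameter,
# assuming --parameter-type is available in your hammer version.
hammer host set-parameter --host "my-host.example.com" --name "enable-remote-execution-pull" --parameter-type boolean --value "true"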
2.2. Cloning hosts

You can clone existing hosts.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. In the Actions menu, click Clone.
3. On the Host tab, ensure that you provide a Name different from the original host.
4. On the Interfaces tab, ensure that you provide a different IP address.
5. Click Submit to clone the host.

For more information, see Section 2.1, "Creating a host in Red Hat Satellite".

2.3. Associating a virtual machine with Satellite from a hypervisor

Procedure

1. In the Satellite web UI, navigate to Infrastructure > Compute Resources.
2. Select a compute resource.
3. On the Virtual Machines tab, click Associate VM from the Actions menu.

2.4. Editing the system purpose of a host

You can edit the system purpose attributes for a Red Hat Enterprise Linux host. System purpose allows you to set the intended use of a system on your network and improves reporting accuracy in the Subscriptions service of the Red Hat Hybrid Cloud Console. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8.

Prerequisites

The host that you want to edit must be registered with subscription-manager.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. On the Overview tab, click Edit on the System purpose card.
4. Select the system purpose attributes for your host.
5. Click Save.

CLI procedure

1. Log in to the host and edit the required system purpose attributes. For example, set the usage type to Production, the role to Red Hat Enterprise Linux Server, and add the addon add-on. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8.
2. Verify the system purpose attributes for this host.
3. Automatically attach subscriptions to this host.
4. Verify the system purpose status for this host.

The subscription-manager commands for these steps are listed with the other commands at the end of this chapter.
2.5. Editing the system purpose of multiple hosts

You can edit the system purpose attributes of Red Hat Enterprise Linux hosts. System purpose attributes define which subscriptions to attach automatically to hosts. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8.

Prerequisites

The hosts that you want to edit must be registered with subscription-manager.

Procedure

1. In the Satellite web UI, navigate to Hosts > Content Hosts and select the Red Hat Enterprise Linux 8 hosts that you want to edit.
2. Click the Select Action list and select Manage System Purpose.
3. Select the system purpose attributes that you want to assign to the selected hosts. You can select one of the following values:
   - A specific attribute to set on all selected hosts.
   - No Change to keep the attribute set on the selected hosts.
   - None (Clear) to clear the attribute on the selected hosts.
4. Click Assign.
5. In the Satellite web UI, navigate to Hosts > Content Hosts and select the same Red Hat Enterprise Linux 8 hosts to automatically attach subscriptions based on the system purpose.
6. Click the Select Action list and select Manage Subscriptions.
7. Click Auto-Attach to attach subscriptions to all selected hosts automatically based on their system role.

2.6. Changing a module stream for a host

If you have a host running Red Hat Enterprise Linux 8, you can modify the module stream for the repositories you install. You can enable, disable, install, update, and remove module streams from your host in the Satellite web UI.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. Click the Content tab, then click the Module streams tab.
4. Click the vertical ellipsis next to the module and select the action you want to perform.

You get a REX job notification once the remote execution job is complete.
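The module stream actions that Satellite triggers through remote execution correspond to standard dnf module operations on a Red Hat Enterprise Linux 8 host. The following is a minimal host-side sketch; the module and stream names (postgresql:12 and postgresql:13) are illustrative examples, not values taken from this guide.

# List the available streams for a module on the host.
dnf module list postgresql

# Enable a specific stream and install its default profile.
dnf module enable postgresql:12
dnf module install postgresql:12

# To switch streams later, reset the module first, then enable another stream.
dnf module reset postgresql
dnf module enable postgresql:13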
2.7. Enabling custom repositories on content hosts

As a Simple Content Access (SCA) user, you can enable all custom repositories on content hosts using the Satellite web UI.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts and select a host.
2. Select the Content tab, then select Repository sets.
3. From the dropdown, you can filter the Repository type column to Custom.
4. Select the desired number of repositories or click the Select All checkbox to select all repositories, then click the vertical ellipsis and select Override to Enabled.

2.8. Changing the content source of a host

A content source is a Capsule that a host consumes content from. Use this procedure to change the content source for a host.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. Click the vertical ellipsis icon next to the Edit button and select Change content source.
4. Select Content Source, Lifecycle Environment, and Content View from the lists.
5. Click Change content source.

Note: Some lifecycle environments can be unavailable for selection if they are not synced on the selected content source. For more information, see Adding lifecycle environments to Capsule Servers in Managing content.

You can complete the content source change either using remote execution or manually. To update the configuration on the host using remote execution, click Run job invocation. For more information about running remote execution jobs, see Configuring and Setting up Remote Jobs. To update the content source manually, execute the autogenerated commands from Change content source on the host.

2.9. Changing the environment of a host

Use this procedure to change the environment of a host.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. Click the vertical ellipsis in the Content view details card and select Edit content view assignment.
4. Select the environment.
5. Select the content view.
6. Click Save.

2.10. Changing the managed status of a host

Hosts provisioned by Satellite are Managed by default. When a host is set to Managed, you can configure additional host parameters from Satellite Server. These additional parameters are listed on the Operating System tab. If you change any settings on the Operating System tab, they will not take effect until you set the host to build and reboot it. If you need to obtain reports about configuration management on systems using an operating system not supported by Satellite, set the host to Unmanaged.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. Click Edit.
4. Click Manage host or Unmanage host to change the host's status.
5. Click Submit.

2.11. Enabling Tracer on a host

Use this procedure to enable Tracer on Satellite and access Traces. Tracer displays a list of services and applications that need to be restarted. Traces is the output generated by Tracer in the Satellite web UI.

Prerequisites

- The Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content.
- Remote execution is enabled.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. On the Traces tab, click Enable Traces.
4. Select the provider to install katello-host-tools-tracer from the list.
5. Click Enable Tracer. You get a REX job notification after the remote execution job is complete.

2.12. Restarting applications on a host

Use this procedure to restart applications from the Satellite web UI.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click the name of the host you want to modify.
3. Select the Traces tab.
4. Select the applications that you want to restart.
5. Select Restart via remote execution from the Restart app list. You will get a REX job notification once the remote execution job is complete.
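If you want to verify the Tracer setup on the host after the remote execution job completes, the following is a minimal sketch. It assumes the Red Hat Satellite Client 6 repository is enabled on the host and that the katello-tracer-upload utility shipped with the katello-host-tools-tracer package is on the PATH.

# Confirm that the Tracer tooling installed by Satellite is present on the host.
rpm -q katello-host-tools-tracer

# Report applications that need restarting and upload the result to Satellite,
# so the Traces tab reflects the current state of the host.
katello-tracer-upload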
2.13. Assigning a host to a specific organization

Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in Administering Red Hat Satellite.

Note: If your host is already registered with a different organization, you must first unregister the host before assigning it to a new organization. To unregister the host, run subscription-manager unregister on the host. After you assign the host to a new organization, you can re-register the host, as shown in the sketch after this procedure.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Select the checkbox of the host you want to change.
3. From the Select Action list, select Assign Organization. A new option window opens.
4. From the Select Organization list, select the organization that you want to assign your host to.
5. Select the checkbox Fix Organization on Mismatch.
   Note: A mismatch happens if there is a resource associated with a host, such as a domain or subnet, that is at the same time not associated with the organization you want to assign the host to. The option Fix Organization on Mismatch will add such a resource to the organization and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one organization to another will fail, even if there is no actual mismatch in settings.
6. Click Submit.
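The following is a minimal sketch of the unregister and re-register flow mentioned in the note above, run on the host itself. It assumes you register by using an activation key; the organization label and activation key name are placeholders for your environment.

# On the host: remove the existing registration with the old organization.
subscription-manager unregister

# Re-register the host against Satellite with the new organization.
# "New_Organization" and "My_Activation_Key" are placeholders.
subscription-manager register --org="New_Organization" --activationkey="My_Activation_Key"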
2.14. Assigning a host to a specific location

Use this procedure to assign a host to a specific location. For general information about locations and how to configure them, see Creating a Location in Managing content.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Select the checkbox of the host you want to change.
3. From the Select Action list, select Assign Location. A new option window opens.
4. Navigate to the Select Location list and choose the location that you want for your host.
5. Select the checkbox Fix Location on Mismatch.
   Note: A mismatch happens if there is a resource associated with a host, such as a domain or subnet, that is at the same time not associated with the location you want to assign the host to. The option Fix Location on Mismatch will add such a resource to the location and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one location to another will fail, even if there is no actual mismatch in settings.
6. Click Submit.

2.15. Switching between hosts

When you are on a particular host in the Satellite web UI, you can navigate between hosts without leaving the page by using the host switcher. Click ⇄ next to the hostname. This displays a list of hosts in alphabetical order with a pagination arrow and a search bar to find the host you are looking for.

2.16. Viewing host details from a content host

Use this procedure to view the host details page from a content host.

Procedure

1. In the Satellite web UI, navigate to Hosts > Content Hosts.
2. Click the content host you want to view.
3. Select the Details tab to see the host details page.

The cards on the Details tab show details for the System properties, BIOS, Networking interfaces, Operating system, Provisioning templates, and Provisioning. Registered content hosts show additional cards for Registration details, Installed products, and HW properties, providing information about Model, Number of CPU(s), Sockets, Cores per socket, and RAM.

2.17. Selecting host columns

You can select what columns you want to see in the host table on the Hosts > All Hosts page. For a complete list of host columns, see Appendix D, Overview of the host columns.

Note: It is not possible to deselect the Name column. The Name column serves as a primary identification method of the host.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click Manage columns.
3. Select the columns that you want to display. You can select individual columns or column categories. Selecting or deselecting a category selects or deselects all columns in that category.

Note: Some columns are included in more than one category, but you can display a column of a specific type only once. By selecting or deselecting a specific column, you select or deselect all instances of that column.

Verification

You can now see the selected columns in the host table.

2.18. Removing a host from Satellite

Use this procedure to remove a host from Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts or Hosts > Content Hosts. Note that there is no difference in which page you remove a host from, All Hosts or Content Hosts. In both cases, Satellite removes the host completely.
2. Select the hosts that you want to remove.
3. From the Select Action list, select Delete Hosts.
4. Click Submit to remove the host from Satellite permanently.

Warning: By default, the Destroy associated VM on host delete setting is set to no. If a host record that is associated with a virtual machine is deleted, the virtual machine will remain on the compute resource. To delete a virtual machine on the compute resource, navigate to Administer > Settings and select the Provisioning tab. Setting Destroy associated VM on host delete to yes deletes the virtual machine if the host record that is associated with the virtual machine is deleted. To avoid deleting the virtual machine in this situation, disassociate the virtual machine from Satellite without removing it from the compute resource or change the setting.

CLI procedure

Delete your host from Satellite by using the hammer host delete command listed at the end of this chapter. Alternatively, you can use --name My_Host_Name instead of --id My_Host_ID.

2.18.1. Disassociating a virtual machine from Satellite without removing it from a hypervisor

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Select the checkbox to the left of the hosts that you want to disassociate.
3. From the Select Action list, click Disassociate Hosts.
4. Optional: Select the checkbox to keep the hosts for future action.
5. Click Submit.

2.19. Lifecycle status of RHEL hosts

Satellite provides multiple mechanisms to display information about upcoming End of Support (EOS) events for your Red Hat Enterprise Linux hosts:

- Notification banner
- A column on the Hosts index page
- Alert on the Hosts index page for each host that runs Red Hat Enterprise Linux with an upcoming EOS event within a year, as well as when support has ended
- Ability to search for hosts by EOS on the Hosts index page
- Host status card on the host details page

For any hosts that are not running Red Hat Enterprise Linux, Satellite displays Unknown in the RHEL Lifecycle status and Last report columns.

EOS notification banner

When either the end of maintenance support or the end of extended lifecycle support approaches within a year, you will see a notification banner in the Satellite web UI if you have hosts with that Red Hat Enterprise Linux version. The notification provides information about the Red Hat Enterprise Linux version, the number of hosts running that version in your environment, the lifecycle support, and the expiration date. Along with other information, the Red Hat Enterprise Linux lifecycle column is visible in the notification.

2.19.1. Displaying RHEL lifecycle status

You can display the status of the end of support (EOS) for your Red Hat Enterprise Linux hosts in the table on the Hosts index page.

Procedure

1. In the Satellite web UI, navigate to Hosts > All Hosts.
2. Click Manage columns.
3. Select the Content column to expand it.
4. Select RHEL Lifecycle status.
5. Click Save to generate a new column that displays the Red Hat Enterprise Linux lifecycle status.

2.19.2. Host search by RHEL lifecycle status

You can use the Search field to search hosts by rhel_lifecycle_status. It can have one of the following values:

- full_support
- maintenance_support
- approaching_end_of_maintenance
- extended_support
- approaching_end_of_support
- support_ended
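You can reuse the same search key from the command line. The following is a minimal sketch, assuming a hammer version that supports the rhel_lifecycle_status search key listed above.

# List hosts whose RHEL lifecycle support has already ended.
hammer host list --search "rhel_lifecycle_status = support_ended"

# List hosts approaching the end of maintenance support.
hammer host list --search "rhel_lifecycle_status = approaching_end_of_maintenance"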
[ "hammer host create --ask-root-password yes --hostgroup \" My_Host_Group \" --interface=\"primary=true, provision=true, mac= My_MAC_Address , ip= My_IP_Address \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \"", "subscription-manager syspurpose set usage ' Production ' subscription-manager syspurpose set role ' Red Hat Enterprise Linux Server ' subscription-manager syspurpose add addons ' your_addon '", "subscription-manager syspurpose", "subscription-manager attach --auto", "subscription-manager status", "hammer host delete --id My_Host_ID --location-id My_Location_ID --organization-id My_Organization_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/Administering_Hosts_managing-hosts
4.149. libtirpc
4.149. libtirpc

4.149.1. RHBA-2011:1745 - libtirpc bug fix update

An updated libtirpc package that fixes one bug is now available for Red Hat Enterprise Linux 6. The libtirpc package contains SunLib's implementation of transport-independent RPC (TI-RPC) documentation. This includes a library required by programs in the nfs-utils and rpcbind packages.

Bug Fix

BZ# 714015
Due to certain errors and missing code in libtirpc, user space NFS servers were not able to fully utilize the RPCSEC_GSS security protocol, which allows remote procedure call (RPC) protocols to access the Generic Security Services Application Programming Interface (GSS-API). With this update, the problems have been fixed in the libtirpc code, and the RPCSEC_GSS protocol can now be used by NFS servers properly.

All users of libtirpc are advised to upgrade to this updated package, which fixes this bug.
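To check whether a system already has the fixed package and to apply the update, a minimal sketch using standard RHEL 6 tooling follows; it assumes the host is subscribed to the appropriate update repositories.

# Check the currently installed libtirpc version on the host.
rpm -q libtirpc

# Apply the updated package from the attached repositories.
yum update libtirpc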
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libtirpc
4.272. ricci
4.272. ricci

4.272.1. RHBA-2011:1698 - ricci bug fix and enhancement update

Updated ricci packages that fix multiple bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The ricci packages contain a daemon and a client for remote configuration and management of clusters.

Bug Fixes

BZ# 697493
Prior to this update, the ccs_sync utility could not handle IPv6 addresses. This could prevent the cluster.conf file from being distributed to nodes. The ccs_sync utility has been modified to recognize and use IPv6 addresses. Now, the cluster.conf file is distributed to all nodes correctly.

BZ# 718230
The ccs tool did not add or list virtual machine services correctly when using the "ccs --addresource" command. This was caused by the virtual machine resource being incorrectly added in the "resources" tag instead of the "rm" tag. This problem has been fixed, and virtual machine services are now added directly in the "rm" tag when using the ccs tool.

BZ# 725722
Prior to this update, the /usr/share/ccs/cluster.rng schema file did not contain a definition of the "suborg" option for the fence_cisco_ucs agent. As a consequence, the cluster.conf file was not changed when adding a fencing instance definition with the "suborg" option. With this update, the cluster.rng schema has been modified to match the schema present in the cman package.

BZ# 721109
Previous versions of ricci did not require the modcluster package even though it was needed for ricci to work correctly. With this update, ricci now requires modcluster to be installed.

Enhancement

BZ# 696901
The ccs utility can now parse metadata in /usr/share/cluster and list all the services and fence devices available, as well as their options.

All users of ricci are advised to upgrade to these updated ricci packages, which fix these bugs and add this enhancement.
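The enhancement above is exposed through the ccs listing options. The following is a minimal sketch, assuming a cluster node named node1.example.com and a ccs version that provides the --lsserviceopts and --lsfenceopts options described in the RHEL 6 cluster documentation.

# List the available service and resource agents that ccs discovers
# from the metadata in /usr/share/cluster.
ccs -h node1.example.com --lsserviceopts

# List the available fence devices and their options.
ccs -h node1.example.com --lsfenceopts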
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/ricci
Appendix B. Event Codes
Appendix B. Event Codes This table lists all event codes. Table B.1. Event codes Code Name Severity Message 0 UNASSIGNED Info 1 VDC_START Info Starting oVirt Engine. 2 VDC_STOP Info Stopping oVirt Engine. 12 VDS_FAILURE Error Host USD{VdsName} is non responsive. 13 VDS_DETECTED Info Status of host USD{VdsName} was set to USD{HostStatus}. 14 VDS_RECOVER Info Host USD{VdsName} is rebooting. 15 VDS_MAINTENANCE Normal Host USD{VdsName} was switched to Maintenance Mode. 16 VDS_ACTIVATE Info Activation of host USD{VdsName} initiated by USD{UserName}. 17 VDS_MAINTENANCE_FAILED Error Failed to switch Host USD{VdsName} to Maintenance mode. 18 VDS_ACTIVATE_FAILED Error Failed to activate Host USD{VdsName}.(User: USD{UserName}). 19 VDS_RECOVER_FAILED Error Host USD{VdsName} failed to recover. 20 USER_VDS_START Info Host USD{VdsName} was started by USD{UserName}. 21 USER_VDS_STOP Info Host USD{VdsName} was stopped by USD{UserName}. 22 IRS_FAILURE Error Failed to access Storage on Host USD{VdsName}. 23 VDS_LOW_DISK_SPACE Warning Warning, Low disk space. Host USD{VdsName} has less than USD{DiskSpace} MB of free space left on: USD{Disks}. 24 VDS_LOW_DISK_SPACE_ERROR Error Critical, Low disk space. Host USD{VdsName} has less than USD{DiskSpace} MB of free space left on: USD{Disks}. Low disk space might cause an issue upgrading this host. 25 VDS_NO_SELINUX_ENFORCEMENT Warning Host USD{VdsName} does not enforce SELinux. Current status: USD{Mode} 26 IRS_DISK_SPACE_LOW Warning Warning, Low disk space. USD{StorageDomainName} domain has USD{DiskSpace} GB of free space. 27 VDS_STATUS_CHANGE_FAILED_DUE_TO_STOP_SPM_FAILURE Warning Failed to change status of host USD{VdsName} due to a failure to stop the spm. 28 VDS_PROVISION Warning Installing OS on Host USD{VdsName} using Hostgroup USD{HostGroupName}. 29 USER_ADD_VM_TEMPLATE_SUCCESS Info Template USD{VmTemplateName} was created successfully. 31 USER_VDC_LOGOUT Info User USD{UserName} connected from 'USD{SourceIP}' using session 'USD{SessionID}' logged out. 32 USER_RUN_VM Info VM USD{VmName} started on Host USD{VdsName} 33 USER_STOP_VM Info VM USD{VmName} powered off by USD{UserName} (Host: USD{VdsName})USD{OptionalReason}. 34 USER_ADD_VM Info VM USD{VmName} was created by USD{UserName}. 35 USER_UPDATE_VM Info VM USD{VmName} configuration was updated by USD{UserName}. 36 USER_ADD_VM_TEMPLATE_FAILURE Error Failed creating Template USD{VmTemplateName}. 37 USER_ADD_VM_STARTED Info VM USD{VmName} creation was initiated by USD{UserName}. 38 USER_CHANGE_DISK_VM Info CD USD{DiskName} was inserted to VM USD{VmName} by USD{UserName}. 39 USER_PAUSE_VM Info VM USD{VmName} was suspended by USD{UserName} (Host: USD{VdsName}). 40 USER_RESUME_VM Info VM USD{VmName} was resumed by USD{UserName} (Host: USD{VdsName}). 41 USER_VDS_RESTART Info Host USD{VdsName} was restarted by USD{UserName}. 42 USER_ADD_VDS Info Host USD{VdsName} was added by USD{UserName}. 43 USER_UPDATE_VDS Info Host USD{VdsName} configuration was updated by USD{UserName}. 44 USER_REMOVE_VDS Info Host USD{VdsName} was removed by USD{UserName}. 45 USER_CREATE_SNAPSHOT Info Snapshot 'USD{SnapshotName}' creation for VM 'USD{VmName}' was initiated by USD{UserName}. 46 USER_TRY_BACK_TO_SNAPSHOT Info Snapshot-Preview USD{SnapshotName} for VM USD{VmName} was initiated by USD{UserName}. 47 USER_RESTORE_FROM_SNAPSHOT Info VM USD{VmName} restored from Snapshot by USD{UserName}. 48 USER_ADD_VM_TEMPLATE Info Creation of Template USD{VmTemplateName} from VM USD{VmName} was initiated by USD{UserName}. 
49 USER_UPDATE_VM_TEMPLATE Info Template USD{VmTemplateName} configuration was updated by USD{UserName}. 50 USER_REMOVE_VM_TEMPLATE Info Removal of Template USD{VmTemplateName} was initiated by USD{UserName}. 51 USER_ADD_VM_TEMPLATE_FINISHED_SUCCESS Info Creation of Template USD{VmTemplateName} from VM USD{VmName} has been completed. 52 USER_ADD_VM_TEMPLATE_FINISHED_FAILURE Error Failed to complete creation of Template USD{VmTemplateName} from VM USD{VmName}. 53 USER_ADD_VM_FINISHED_SUCCESS Info VM USD{VmName} creation has been completed. 54 USER_FAILED_RUN_VM Error Failed to run VM USD{VmName}USD{DueToError} (User: USD{UserName}). 55 USER_FAILED_PAUSE_VM Error Failed to suspend VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 56 USER_FAILED_STOP_VM Error Failed to power off VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 57 USER_FAILED_ADD_VM Error Failed to create VM USD{VmName} (User: USD{UserName}). 58 USER_FAILED_UPDATE_VM Error Failed to update VM USD{VmName} (User: USD{UserName}). 59 USER_FAILED_REMOVE_VM Error 60 USER_ADD_VM_FINISHED_FAILURE Error Failed to complete VM USD{VmName} creation. 61 VM_DOWN Info VM USD{VmName} is down. USD{ExitMessage} 62 VM_MIGRATION_START Info Migration started (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}, User: USD{UserName}). USD{OptionalReason} 63 VM_MIGRATION_DONE Info Migration completed (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}, Duration: USD{Duration}, Total: USD{TotalDuration}, Actual downtime: USD{ActualDowntime}) 64 VM_MIGRATION_ABORT Error Migration failed: USD{MigrationError} (VM: USD{VmName}, Source: USD{VdsName}). 65 VM_MIGRATION_FAILED Error Migration failedUSD{DueToMigrationError} (VM: USD{VmName}, Source: USD{VdsName}). 66 VM_FAILURE Error VM USD{VmName} cannot be found on Host USD{VdsName}. 67 VM_MIGRATION_START_SYSTEM_INITIATED Info Migration initiated by system (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}, Reason: USD{OptionalReason}). 68 USER_CREATE_SNAPSHOT_FINISHED_SUCCESS Info Snapshot 'USD{SnapshotName}' creation for VM 'USD{VmName}' has been completed. 69 USER_CREATE_SNAPSHOT_FINISHED_FAILURE Error Failed to complete snapshot 'USD{SnapshotName}' creation for VM 'USD{VmName}'. 70 USER_RUN_VM_AS_STATELESS_FINISHED_FAILURE Error Failed to complete starting of VM USD{VmName}. 71 USER_TRY_BACK_TO_SNAPSHOT_FINISH_SUCCESS Info Snapshot-Preview USD{SnapshotName} for VM USD{VmName} has been completed. 72 MERGE_SNAPSHOTS_ON_HOST Info Merging snapshots (USD{SourceSnapshot} into USD{DestinationSnapshot}) of disk USD{Disk} on host USD{VDS} 73 USER_INITIATED_SHUTDOWN_VM Info VM shutdown initiated by USD{UserName} on VM USD{VmName} (Host: USD{VdsName})USD{OptionalReason}. 74 USER_FAILED_SHUTDOWN_VM Error Failed to initiate shutdown on VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 75 VDS_SOFT_RECOVER Info Soft fencing on host USD{VdsName} was successful. 76 USER_STOPPED_VM_INSTEAD_OF_SHUTDOWN Info VM USD{VmName} was powered off ungracefully by USD{UserName} (Host: USD{VdsName})USD{OptionalReason}. 77 USER_FAILED_STOPPING_VM_INSTEAD_OF_SHUTDOWN Error Failed to power off VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 78 USER_ADD_DISK_TO_VM Info Add-Disk operation of USD{DiskAlias} was initiated on VM USD{VmName} by USD{UserName}. 79 USER_FAILED_ADD_DISK_TO_VM Error Add-Disk operation failed on VM USD{VmName} (User: USD{UserName}). 
80 USER_REMOVE_DISK_FROM_VM Info Disk was removed from VM USD{VmName} by USD{UserName}. 81 USER_FAILED_REMOVE_DISK_FROM_VM Error Failed to remove Disk from VM USD{VmName} (User: USD{UserName}). 88 USER_UPDATE_VM_DISK Info VM USD{VmName} USD{DiskAlias} disk was updated by USD{UserName}. 89 USER_FAILED_UPDATE_VM_DISK Error Failed to update VM USD{VmName} disk USD{DiskAlias} (User: USD{UserName}). 90 VDS_FAILED_TO_GET_HOST_HARDWARE_INFO Warning Could not get hardware information for host USD{VdsName} 94 USER_COMMIT_RESTORE_FROM_SNAPSHOT_START Info Committing a Snapshot-Preview for VM USD{VmName} was initialized by USD{UserName}. 95 USER_COMMIT_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS Info Committing a Snapshot-Preview for VM USD{VmName} has been completed. 96 USER_COMMIT_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE Error Failed to commit Snapshot-Preview for VM USD{VmName}. 97 USER_ADD_DISK_TO_VM_FINISHED_SUCCESS Info The disk USD{DiskAlias} was successfully added to VM USD{VmName}. 98 USER_ADD_DISK_TO_VM_FINISHED_FAILURE Error Add-Disk operation failed to complete on VM USD{VmName}. 99 USER_TRY_BACK_TO_SNAPSHOT_FINISH_FAILURE Error Failed to complete Snapshot-Preview USD{SnapshotName} for VM USD{VmName}. 100 USER_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS Info VM USD{VmName} restoring from Snapshot has been completed. 101 USER_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE Error Failed to complete restoring from Snapshot of VM USD{VmName}. 102 USER_FAILED_CHANGE_DISK_VM Error Failed to change disk in VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 103 USER_FAILED_RESUME_VM Error Failed to resume VM USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 104 USER_FAILED_ADD_VDS Error Failed to add Host USD{VdsName} (User: USD{UserName}). 105 USER_FAILED_UPDATE_VDS Error Failed to update Host USD{VdsName} (User: USD{UserName}). 106 USER_FAILED_REMOVE_VDS Error Failed to remove Host USD{VdsName} (User: USD{UserName}). 107 USER_FAILED_VDS_RESTART Error Failed to restart Host USD{VdsName}, (User: USD{UserName}). 108 USER_FAILED_ADD_VM_TEMPLATE Error Failed to initiate creation of Template USD{VmTemplateName} from VM USD{VmName} (User: USD{UserName}). 109 USER_FAILED_UPDATE_VM_TEMPLATE Error Failed to update Template USD{VmTemplateName} (User: USD{UserName}). 110 USER_FAILED_REMOVE_VM_TEMPLATE Error Failed to initiate removal of Template USD{VmTemplateName} (User: USD{UserName}). 111 USER_STOP_SUSPENDED_VM Info Suspended VM USD{VmName} has had its save state cleared by USD{UserName}USD{OptionalReason}. 112 USER_STOP_SUSPENDED_VM_FAILED Error Failed to power off suspended VM USD{VmName} (User: USD{UserName}). 113 USER_REMOVE_VM_FINISHED Info VM USD{VmName} was successfully removed. 115 USER_FAILED_TRY_BACK_TO_SNAPSHOT Error Failed to preview Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 116 USER_FAILED_RESTORE_FROM_SNAPSHOT Error Failed to restore VM USD{VmName} from Snapshot (User: USD{UserName}). 117 USER_FAILED_CREATE_SNAPSHOT Error Failed to create Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 118 USER_FAILED_VDS_START Error Failed to start Host USD{VdsName}, (User: USD{UserName}). 119 VM_DOWN_ERROR Error VM USD{VmName} is down with error. USD{ExitMessage}. 120 VM_MIGRATION_TO_SERVER_FAILED Error Migration failedUSD{DueToMigrationError} (VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}). 121 SYSTEM_VDS_RESTART Info Host USD{VdsName} was restarted by the engine. 
122 SYSTEM_FAILED_VDS_RESTART Error A restart initiated by the engine to Host USD{VdsName} has failed. 123 VDS_SLOW_STORAGE_RESPONSE_TIME Warning Slow storage response time on Host USD{VdsName}. 124 VM_IMPORT Info Started VM import of USD{ImportedVmName} (User: USD{UserName}) 125 VM_IMPORT_FAILED Error Failed to import VM USD{ImportedVmName} (User: USD{UserName}) 126 VM_NOT_RESPONDING Warning VM USD{VmName} is not responding. 127 VDS_RUN_IN_NO_KVM_MODE Error Host USD{VdsName} running without virtualization hardware acceleration 128 VM_MIGRATION_TRYING_RERUN Warning Failed to migrate VM USD{VmName} to Host USD{DestinationVdsName}USD{DueToMigrationError}. Trying to migrate to another Host. 129 VM_CLEARED Info Unused 130 USER_SUSPEND_VM_FINISH_FAILURE_WILL_TRY_AGAIN Error Failed to complete suspending of VM USD{VmName}, will try again. 131 USER_EXPORT_VM Info VM USD{VmName} exported to USD{ExportPath} by USD{UserName} 132 USER_EXPORT_VM_FAILED Error Failed to export VM USD{VmName} to USD{ExportPath} (User: USD{UserName}) 133 USER_EXPORT_TEMPLATE Info Template USD{VmTemplateName} exported to USD{ExportPath} by USD{UserName} 134 USER_EXPORT_TEMPLATE_FAILED Error Failed to export Template USD{VmTemplateName} to USD{ExportPath} (User: USD{UserName}) 135 TEMPLATE_IMPORT Info Started Template import of USD{ImportedVmTemplateName} (User: USD{UserName}) 136 TEMPLATE_IMPORT_FAILED Error Failed to import Template USD{ImportedVmTemplateName} (User: USD{UserName}) 137 USER_FAILED_VDS_STOP Error Failed to stop Host USD{VdsName}, (User: USD{UserName}). 138 VM_PAUSED_ENOSPC Error VM USD{VmName} has been paused due to no Storage space error. 139 VM_PAUSED_ERROR Error VM USD{VmName} has been paused due to unknown storage error. 140 VM_MIGRATION_FAILED_DURING_MOVE_TO_MAINTENANCE Error Migration failedUSD{DueToMigrationError} while Host is in 'preparing for maintenance' state.\n Consider manual intervention\: stopping/migrating Vms as Host's state will not\n turn to maintenance while VMs are still running on it.(VM: USD{VmName}, Source: USD{VdsName}, Destination: USD{DestinationVdsName}). 141 VDS_VERSION_NOT_SUPPORTED_FOR_CLUSTER Error Host USD{VdsName} is installed with VDSM version (USD{VdsSupportedVersions}) and cannot join cluster USD{ClusterName} which is compatible with VDSM versions USD{CompatibilityVersion}. 142 VM_SET_TO_UNKNOWN_STATUS Warning VM USD{VmName} was set to the Unknown status. 143 VM_WAS_SET_DOWN_DUE_TO_HOST_REBOOT_OR_MANUAL_FENCE Info Vm USD{VmName} was shut down due to USD{VdsName} host reboot or manual fence 144 VM_IMPORT_INFO Info Value of field USD{FieldName} of imported VM USD{VmName} is USD{FieldValue}. The field is reset to the default value 145 VM_PAUSED_EIO Error VM USD{VmName} has been paused due to storage I/O problem. 146 VM_PAUSED_EPERM Error VM USD{VmName} has been paused due to storage permissions problem. 147 VM_POWER_DOWN_FAILED Warning Shutdown of VM USD{VmName} failed. 148 VM_MEMORY_UNDER_GUARANTEED_VALUE Error VM USD{VmName} on host USD{VdsName} was guaranteed USD{MemGuaranteed} MB but currently has USD{MemActual} MB 149 USER_ADD Info User 'USD{NewUserName}' was added successfully to the system. 150 USER_INITIATED_RUN_VM Info Starting VM USD{VmName} was initiated by USD{UserName}. 151 USER_INITIATED_RUN_VM_FAILED Warning Failed to run VM USD{VmName} on Host USD{VdsName}. 152 USER_RUN_VM_ON_NON_DEFAULT_VDS Warning Guest USD{VmName} started on Host USD{VdsName}. (Default Host parameter was ignored - assigned Host was not available). 
153 USER_STARTED_VM Info VM USD{VmName} was started by USD{UserName} (Host: USD{VdsName}). 154 VDS_CLUSTER_VERSION_NOT_SUPPORTED Error Host USD{VdsName} is compatible with versions (USD{VdsSupportedVersions}) and cannot join Cluster USD{ClusterName} which is set to version USD{CompatibilityVersion}. 155 VDS_ARCHITECTURE_NOT_SUPPORTED_FOR_CLUSTER Error Host USD{VdsName} has architecture USD{VdsArchitecture} and cannot join Cluster USD{ClusterName} which has architecture USD{ClusterArchitecture}. 156 CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION Error Host USD{VdsName} moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all 157 USER_REBOOT_VM Info User USD{UserName} initiated reboot of VM USD{VmName}. 158 USER_FAILED_REBOOT_VM Error Failed to reboot VM USD{VmName} (User: USD{UserName}). 159 USER_FORCE_SELECTED_SPM Info Host USD{VdsName} was force selected by USD{UserName} 160 USER_ACCOUNT_DISABLED_OR_LOCKED Error User USD{UserName} cannot login, as it got disabled or locked. Please contact the system administrator. 161 VM_CANCEL_MIGRATION Info Migration cancelled (VM: USD{VmName}, Source: USD{VdsName}, User: USD{UserName}). 162 VM_CANCEL_MIGRATION_FAILED Error Failed to cancel migration for VM: USD{VmName} 163 VM_STATUS_RESTORED Info VM USD{VmName} status was restored to USD{VmStatus}. 164 VM_SET_TICKET Info User USD{UserName} initiated console session for VM USD{VmName} 165 VM_SET_TICKET_FAILED Error User USD{UserName} failed to initiate a console session for VM USD{VmName} 166 VM_MIGRATION_NO_VDS_TO_MIGRATE_TO Warning No available host was found to migrate VM USD{VmName} to. 167 VM_CONSOLE_CONNECTED Info User USD{UserName} is connected to VM USD{VmName}. 168 VM_CONSOLE_DISCONNECTED Info User USD{UserName} got disconnected from VM USD{VmName}. 169 VM_FAILED_TO_PRESTART_IN_POOL Warning Cannot pre-start VM in pool 'USD{VmPoolName}'. The system will continue trying. 170 USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE Warning Failed to create live snapshot 'USD{SnapshotName}' for VM 'USD{VmName}'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency. 171 USER_RUN_VM_AS_STATELESS_WITH_DISKS_NOT_ALLOWING_SNAPSHOT Warning VM USD{VmName} was run as stateless with one or more of disks that do not allow snapshots (User:USD{UserName}). 172 USER_REMOVE_VM_FINISHED_WITH_ILLEGAL_DISKS Warning VM USD{VmName} has been removed, but the following disks could not be removed: USD{DisksNames}. These disks will appear in the main disks tab in illegal state, please remove manually when possible. 173 USER_CREATE_LIVE_SNAPSHOT_NO_MEMORY_FAILURE Error Failed to save memory as part of Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 174 VM_IMPORT_FROM_CONFIGURATION_EXECUTED_SUCCESSFULLY Info VM USD{VmName} has been successfully imported from the given configuration. 175 VM_IMPORT_FROM_CONFIGURATION_ATTACH_DISKS_FAILED Warning VM USD{VmName} has been imported from the given configuration but the following disk(s) failed to attach: USD{DiskAliases}. 176 VM_BALLOON_DRIVER_ERROR Error The Balloon driver on VM USD{VmName} on host USD{VdsName} is requested but unavailable. 177 VM_BALLOON_DRIVER_UNCONTROLLED Error The Balloon device on VM USD{VmName} on host USD{VdsName} is inflated but the device cannot be controlled (guest agent is down). 
178 VM_MEMORY_NOT_IN_RECOMMENDED_RANGE Warning VM USD{VmName} was configured with USD{VmMemInMb}MiB of memory while the recommended value range is USD{VmMinMemInMb}MiB - USD{VmMaxMemInMb}MiB 179 USER_INITIATED_RUN_VM_AND_PAUSE Info Starting in paused mode VM USD{VmName} was initiated by USD{UserName}. 180 TEMPLATE_IMPORT_FROM_CONFIGURATION_SUCCESS Info Template USD{VmTemplateName} has been successfully imported from the given configuration. 181 TEMPLATE_IMPORT_FROM_CONFIGURATION_FAILED Error Failed to import Template USD{VmTemplateName} from the given configuration. 182 USER_FAILED_ATTACH_USER_TO_VM Error Failed to attach User USD{AdUserName} to VM USD{VmName} (User: USD{UserName}). 183 USER_ATTACH_TAG_TO_TEMPLATE Info Tag USD{TagName} was attached to Templates(s) USD{TemplatesNames} by USD{UserName}. 184 USER_ATTACH_TAG_TO_TEMPLATE_FAILED Error Failed to attach Tag USD{TagName} to Templates(s) USD{TemplatesNames} (User: USD{UserName}). 185 USER_DETACH_TEMPLATE_FROM_TAG Info Tag USD{TagName} was detached from Template(s) USD{TemplatesNames} by USD{UserName}. 186 USER_DETACH_TEMPLATE_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from TEMPLATE(s) USD{TemplatesNames} (User: USD{UserName}). 187 VDS_STORAGE_CONNECTION_FAILED_BUT_LAST_VDS Error Failed to connect Host USD{VdsName} to Data Center, due to connectivity errors with the Storage. Host USD{VdsName} will remain in Up state (but inactive), as it is the last Host in the Data Center, to enable manual intervention by the Administrator. 188 VDS_STORAGES_CONNECTION_FAILED Error Failed to connect Host USD{VdsName} to the Storage Domains USD{failedStorageDomains}. 189 VDS_STORAGE_VDS_STATS_FAILED Error Host USD{VdsName} reports about one of the Active Storage Domains as Problematic. 190 UPDATE_OVF_FOR_STORAGE_DOMAIN_FAILED Warning Failed to update VMs/Templates OVF data for Storage Domain USD{StorageDomainName} in Data Center USD{StoragePoolName}. 191 CREATE_OVF_STORE_FOR_STORAGE_DOMAIN_FAILED Warning Failed to create OVF store disk for Storage Domain USD{StorageDomainName}.\n The Disk with the id USD{DiskId} might be removed manually for automatic attempt to create new one. \n OVF updates won't be attempted on the created disk. 192 CREATE_OVF_STORE_FOR_STORAGE_DOMAIN_INITIATE_FAILED Warning Failed to create OVF store disk for Storage Domain USD{StorageDomainName}. \n OVF data won't be updated meanwhile for that domain. 193 DELETE_OVF_STORE_FOR_STORAGE_DOMAIN_FAILED Warning Failed to delete the OVF store disk for Storage Domain USD{StorageDomainName}.\n In order to detach the domain please remove it manually or try to detach the domain again for another attempt. 194 VM_CANCEL_CONVERSION Info Conversion cancelled (VM: USD{VmName}, Source: USD{VdsName}, User: USD{UserName}). 195 VM_CANCEL_CONVERSION_FAILED Error Failed to cancel conversion for VM: USD{VmName} 196 VM_RECOVERED_FROM_PAUSE_ERROR Normal VM USD{VmName} has recovered from paused back to up. 197 SYSTEM_SSH_HOST_RESTART Info Host USD{VdsName} was restarted using SSH by the engine. 198 SYSTEM_FAILED_SSH_HOST_RESTART Error A restart using SSH initiated by the engine to Host USD{VdsName} has failed. 199 USER_UPDATE_OVF_STORE Info OVF_STORE for domain USD{StorageDomainName} was updated by USD{UserName}. 200 IMPORTEXPORT_GET_VMS_INFO_FAILED Error Failed to retrieve VM/Templates information from export domain USD{StorageDomainName} 201 IRS_DISK_SPACE_LOW_ERROR Error Critical, Low disk space. USD{StorageDomainName} domain has USD{DiskSpace} GB of free space. 
202 IMPORTEXPORT_GET_EXTERNAL_VMS_INFO_FAILED Error Failed to retrieve VMs information from external server USD{URL} 204 IRS_HOSTED_ON_VDS Info Storage Pool Manager runs on Host USD{VdsName} (Address: USD{ServerIp}), Data Center USD{StoragePoolName}. 205 PROVIDER_ADDED Info Provider USD{ProviderName} was added. (User: USD{UserName}) 206 PROVIDER_ADDITION_FAILED Error Failed to add provider USD{ProviderName}. (User: USD{UserName}) 207 PROVIDER_UPDATED Info Provider USD{ProviderName} was updated. (User: USD{UserName}) 208 PROVIDER_UPDATE_FAILED Error Failed to update provider USD{ProviderName}. (User: USD{UserName}) 209 PROVIDER_REMOVED Info Provider USD{ProviderName} was removed. (User: USD{UserName}) 210 PROVIDER_REMOVAL_FAILED Error Failed to remove provider USD{ProviderName}. (User: USD{UserName}) 213 PROVIDER_CERTIFICATE_IMPORTED Info Certificate for provider USD{ProviderName} was imported. (User: USD{UserName}) 214 PROVIDER_CERTIFICATE_IMPORT_FAILED Error Failed importing Certificate for provider USD{ProviderName}. (User: USD{UserName}) 215 PROVIDER_SYNCHRONIZED Info 216 PROVIDER_SYNCHRONIZED_FAILED Error Failed to synchronize networks of Provider USD{ProviderName}. 217 PROVIDER_SYNCHRONIZED_PERFORMED Info Networks of Provider USD{ProviderName} were successfully synchronized. 218 PROVIDER_SYNCHRONIZED_PERFORMED_FAILED Error Networks of Provider USD{ProviderName} were incompletely synchronized. 219 PROVIDER_SYNCHRONIZED_DISABLED Error Failed to synchronize networks of Provider USD{ProviderName}, because the authentication information of the provider is invalid. Automatic synchronization is deactivated for this Provider. 250 USER_UPDATE_VM_CLUSTER_DEFAULT_HOST_CLEARED Info USD{VmName} cluster was updated by USD{UserName}, Default host was reset to auto assign. 251 USER_REMOVE_VM_TEMPLATE_FINISHED Info Removal of Template USD{VmTemplateName} has been completed. 252 SYSTEM_FAILED_UPDATE_VM Error Failed to Update VM USD{VmName} that was initiated by system. 253 SYSTEM_UPDATE_VM Info VM USD{VmName} configuration was updated by system. 254 VM_ALREADY_IN_REQUESTED_STATUS Info VM USD{VmName} is already USD{VmStatus}, USD{Action} was skipped. User: USD{UserName}. 302 USER_ADD_VM_POOL_WITH_VMS Info VM Pool USD{VmPoolName} (containing USD{VmsCount} VMs) was created by USD{UserName}. 303 USER_ADD_VM_POOL_WITH_VMS_FAILED Error Failed to create VM Pool USD{VmPoolName} (User: USD{UserName}). 304 USER_REMOVE_VM_POOL Info VM Pool USD{VmPoolName} was removed by USD{UserName}. 305 USER_REMOVE_VM_POOL_FAILED Error Failed to remove VM Pool USD{VmPoolName} (User: USD{UserName}). 306 USER_ADD_VM_TO_POOL Info VM USD{VmName} was added to VM Pool USD{VmPoolName} by USD{UserName}. 307 USER_ADD_VM_TO_POOL_FAILED Error Failed to add VM USD{VmName} to VM Pool USD{VmPoolName}(User: USD{UserName}). 308 USER_REMOVE_VM_FROM_POOL Info VM USD{VmName} was removed from VM Pool USD{VmPoolName} by USD{UserName}. 309 USER_REMOVE_VM_FROM_POOL_FAILED Error Failed to remove VM USD{VmName} from VM Pool USD{VmPoolName} (User: USD{UserName}). 310 USER_ATTACH_USER_TO_POOL Info User USD{AdUserName} was attached to VM Pool USD{VmPoolName} by USD{UserName}. 311 USER_ATTACH_USER_TO_POOL_FAILED Error Failed to attach User USD{AdUserName} to VM Pool USD{VmPoolName} (User: USD{UserName}). 312 USER_DETACH_USER_FROM_POOL Info User USD{AdUserName} was detached from VM Pool USD{VmPoolName} by USD{UserName}. 313 USER_DETACH_USER_FROM_POOL_FAILED Error Failed to detach User USD{AdUserName} from VM Pool USD{VmPoolName} (User: USD{UserName}). 
314 USER_UPDATE_VM_POOL Info VM Pool USD{VmPoolName} configuration was updated by USD{UserName}. 315 USER_UPDATE_VM_POOL_FAILED Error Failed to update VM Pool USD{VmPoolName} configuration (User: USD{UserName}). 316 USER_ATTACH_USER_TO_VM_FROM_POOL Info Attaching User USD{AdUserName} to VM USD{VmName} in VM Pool USD{VmPoolName} was initiated by USD{UserName}. 317 USER_ATTACH_USER_TO_VM_FROM_POOL_FAILED Error Failed to attach User USD{AdUserName} to VM from VM Pool USD{VmPoolName} (User: USD{UserName}). 318 USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_SUCCESS Info User USD{AdUserName} successfully attached to VM USD{VmName} in VM Pool USD{VmPoolName}. 319 USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_FAILURE Error Failed to attach user USD{AdUserName} to VM USD{VmName} in VM Pool USD{VmPoolName}. 320 USER_ADD_VM_POOL_WITH_VMS_ADD_VDS_FAILED Error Pool USD{VmPoolName} Created, but some Vms failed to create (User: USD{UserName}). 321 USER_REMOVE_VM_POOL_INITIATED Info VM Pool USD{VmPoolName} removal was initiated by USD{UserName}. 325 USER_REMOVE_ADUSER Info User USD{AdUserName} was removed by USD{UserName}. 326 USER_FAILED_REMOVE_ADUSER Error Failed to remove User USD{AdUserName} (User: USD{UserName}). 327 USER_FAILED_ADD_ADUSER Warning Failed to add User 'USD{NewUserName}' to the system. 342 USER_REMOVE_SNAPSHOT Info Snapshot 'USD{SnapshotName}' deletion for VM 'USD{VmName}' was initiated by USD{UserName}. 343 USER_FAILED_REMOVE_SNAPSHOT Error Failed to remove Snapshot USD{SnapshotName} for VM USD{VmName} (User: USD{UserName}). 344 USER_UPDATE_VM_POOL_WITH_VMS Info VM Pool USD{VmPoolName} was updated by USD{UserName}, USD{VmsCount} VMs were added. 345 USER_UPDATE_VM_POOL_WITH_VMS_FAILED Error Failed to update VM Pool USD{VmPoolName}(User: USD{UserName}). 346 USER_PASSWORD_CHANGED Info Password changed successfully for USD{UserName} 347 USER_PASSWORD_CHANGE_FAILED Error Failed to change password. (User: USD{UserName}) 348 USER_CLEAR_UNKNOWN_VMS Info All VMs' status on Non Responsive Host USD{VdsName} were changed to 'Down' by USD{UserName} 349 USER_FAILED_CLEAR_UNKNOWN_VMS Error Failed to clear VMs' status on Non Responsive Host USD{VdsName}. (User: USD{UserName}). 350 USER_ADD_BOOKMARK Info Bookmark USD{BookmarkName} was added by USD{UserName}. 351 USER_ADD_BOOKMARK_FAILED Error Failed to add bookmark: USD{BookmarkName} (User: USD{UserName}). 352 USER_UPDATE_BOOKMARK Info Bookmark USD{BookmarkName} was updated by USD{UserName}. 353 USER_UPDATE_BOOKMARK_FAILED Error Failed to update bookmark: USD{BookmarkName} (User: USD{UserName}) 354 USER_REMOVE_BOOKMARK Info Bookmark USD{BookmarkName} was removed by USD{UserName}. 355 USER_REMOVE_BOOKMARK_FAILED Error Failed to remove bookmark USD{BookmarkName} (User: USD{UserName}) 356 USER_REMOVE_SNAPSHOT_FINISHED_SUCCESS Info Snapshot 'USD{SnapshotName}' deletion for VM 'USD{VmName}' has been completed. 357 USER_REMOVE_SNAPSHOT_FINISHED_FAILURE Error Failed to delete snapshot 'USD{SnapshotName}' for VM 'USD{VmName}'. 358 USER_VM_POOL_MAX_SUBSEQUENT_FAILURES_REACHED Warning Not all VMs where successfully created in VM Pool USD{VmPoolName}. 359 USER_REMOVE_SNAPSHOT_FINISHED_FAILURE_PARTIAL_SNAPSHOT Warning Due to partial snapshot removal, Snapshot 'USD{SnapshotName}' of VM 'USD{VmName}' now contains only the following disks: 'USD{DiskAliases}'. 360 USER_DETACH_USER_FROM_VM Info User USD{AdUserName} was detached from VM USD{VmName} by USD{UserName}. 
361 USER_FAILED_DETACH_USER_FROM_VM Error Failed to detach User USD{AdUserName} from VM USD{VmName} (User: USD{UserName}). 362 USER_REMOVE_SNAPSHOT_FINISHED_FAILURE_BASE_IMAGE_NOT_FOUND Error Failed to merge images of snapshot 'USD{SnapshotName}': base volume 'USD{BaseVolumeId}' is missing. This may have been caused by a failed attempt to remove the parent snapshot; if this is the case, please retry deletion of the parent snapshot before deleting this one. 370 USER_EXTEND_DISK_SIZE_FAILURE Error Failed to extend size of the disk 'USD{DiskAlias}' to USD{NewSize} GB, User: USD{UserName}. 371 USER_EXTEND_DISK_SIZE_SUCCESS Info Size of the disk 'USD{DiskAlias}' was successfully updated to USD{NewSize} GB by USD{UserName}. 372 USER_EXTEND_DISK_SIZE_UPDATE_VM_FAILURE Warning Failed to update VM 'USD{VmName}' with the new volume size. VM restart is recommended. 373 USER_REMOVE_DISK_SNAPSHOT Info Disk 'USD{DiskAlias}' from Snapshot(s) 'USD{Snapshots}' of VM 'USD{VmName}' deletion was initiated by USD{UserName}. 374 USER_FAILED_REMOVE_DISK_SNAPSHOT Error Failed to delete Disk 'USD{DiskAlias}' from Snapshot(s) USD{Snapshots} of VM USD{VmName} (User: USD{UserName}). 375 USER_REMOVE_DISK_SNAPSHOT_FINISHED_SUCCESS Info Disk 'USD{DiskAlias}' from Snapshot(s) 'USD{Snapshots}' of VM 'USD{VmName}' deletion has been completed (User: USD{UserName}). 376 USER_REMOVE_DISK_SNAPSHOT_FINISHED_FAILURE Error Failed to complete deletion of Disk 'USD{DiskAlias}' from snapshot(s) 'USD{Snapshots}' of VM 'USD{VmName}' (User: USD{UserName}). 377 USER_EXTENDED_DISK_SIZE Info Extending disk 'USD{DiskAlias}' to USD{NewSize} GB was initiated by USD{UserName}. 378 USER_REGISTER_DISK_FINISHED_SUCCESS Info Disk 'USD{DiskAlias}' has been successfully registered as a floating disk. 379 USER_REGISTER_DISK_FINISHED_FAILURE Error Failed to register Disk 'USD{DiskAlias}'. 380 USER_EXTEND_DISK_SIZE_UPDATE_HOST_FAILURE Warning Failed to refresh volume size on host 'USD{VdsName}'. Please try the operation again. 381 USER_REGISTER_DISK_INITIATED Info Registering Disk 'USD{DiskAlias}' has been initiated. 382 USER_REDUCE_DISK_FINISHED_SUCCESS Info Disk 'USD{DiskAlias}' has been successfully reduced. 383 USER_REDUCE_DISK_FINISHED_FAILURE Error Failed to reduce Disk 'USD{DiskAlias}'. 400 USER_ATTACH_VM_TO_AD_GROUP Info Group USD{GroupName} was attached to VM USD{VmName} by USD{UserName}. 401 USER_ATTACH_VM_TO_AD_GROUP_FAILED Error Failed to attach Group USD{GroupName} to VM USD{VmName} (User: USD{UserName}). 402 USER_DETACH_VM_TO_AD_GROUP Info Group USD{GroupName} was detached from VM USD{VmName} by USD{UserName}. 403 USER_DETACH_VM_TO_AD_GROUP_FAILED Error Failed to detach Group USD{GroupName} from VM USD{VmName} (User: USD{UserName}). 404 USER_ATTACH_VM_POOL_TO_AD_GROUP Info Group USD{GroupName} was attached to VM Pool USD{VmPoolName} by USD{UserName}. 405 USER_ATTACH_VM_POOL_TO_AD_GROUP_FAILED Error Failed to attach Group USD{GroupName} to VM Pool USD{VmPoolName} (User: USD{UserName}). 406 USER_DETACH_VM_POOL_TO_AD_GROUP Info Group USD{GroupName} was detached from VM Pool USD{VmPoolName} by USD{UserName}. 407 USER_DETACH_VM_POOL_TO_AD_GROUP_FAILED Error Failed to detach Group USD{GroupName} from VM Pool USD{VmPoolName} (User: USD{UserName}). 408 USER_REMOVE_AD_GROUP Info Group USD{GroupName} was removed by USD{UserName}. 409 USER_REMOVE_AD_GROUP_FAILED Error Failed to remove group USD{GroupName} (User: USD{UserName}). 430 USER_UPDATE_TAG Info Tag USD{TagName} configuration was updated by USD{UserName}. 
431 USER_UPDATE_TAG_FAILED Error Failed to update Tag USD{TagName} (User: USD{UserName}). 432 USER_ADD_TAG Info New Tag USD{TagName} was created by USD{UserName}. 433 USER_ADD_TAG_FAILED Error Failed to create Tag named USD{TagName} (User: USD{UserName}). 434 USER_REMOVE_TAG Info Tag USD{TagName} was removed by USD{UserName}. 435 USER_REMOVE_TAG_FAILED Error Failed to remove Tag USD{TagName} (User: USD{UserName}). 436 USER_ATTACH_TAG_TO_USER Info Tag USD{TagName} was attached to User(s) USD{AttachUsersNames} by USD{UserName}. 437 USER_ATTACH_TAG_TO_USER_FAILED Error Failed to attach Tag USD{TagName} to User(s) USD{AttachUsersNames} (User: USD{UserName}). 438 USER_ATTACH_TAG_TO_USER_GROUP Info Tag USD{TagName} was attached to Group(s) USD{AttachGroupsNames} by USD{UserName}. 439 USER_ATTACH_TAG_TO_USER_GROUP_FAILED Error Failed to attach Group(s) USD{AttachGroupsNames} to Tag USD{TagName} (User: USD{UserName}). 440 USER_ATTACH_TAG_TO_VM Info Tag USD{TagName} was attached to VM(s) USD{VmsNames} by USD{UserName}. 441 USER_ATTACH_TAG_TO_VM_FAILED Error Failed to attach Tag USD{TagName} to VM(s) USD{VmsNames} (User: USD{UserName}). 442 USER_ATTACH_TAG_TO_VDS Info Tag USD{TagName} was attached to Host(s) USD{VdsNames} by USD{UserName}. 443 USER_ATTACH_TAG_TO_VDS_FAILED Error Failed to attach Tag USD{TagName} to Host(s) USD{VdsNames} (User: USD{UserName}). 444 USER_DETACH_VDS_FROM_TAG Info Tag USD{TagName} was detached from Host(s) USD{VdsNames} by USD{UserName}. 445 USER_DETACH_VDS_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from Host(s) USD{VdsNames} (User: USD{UserName}). 446 USER_DETACH_VM_FROM_TAG Info Tag USD{TagName} was detached from VM(s) USD{VmsNames} by USD{UserName}. 447 USER_DETACH_VM_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from VM(s) USD{VmsNames} (User: USD{UserName}). 448 USER_DETACH_USER_FROM_TAG Info Tag USD{TagName} detached from User(s) USD{DetachUsersNames} by USD{UserName}. 449 USER_DETACH_USER_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from User(s) USD{DetachUsersNames} (User: USD{UserName}). 450 USER_DETACH_USER_GROUP_FROM_TAG Info Tag USD{TagName} was detached from Group(s) USD{DetachGroupsNames} by USD{UserName}. 451 USER_DETACH_USER_GROUP_FROM_TAG_FAILED Error Failed to detach Tag USD{TagName} from Group(s) USD{DetachGroupsNames} (User: USD{UserName}). 452 USER_ATTACH_TAG_TO_USER_EXISTS Warning Tag USD{TagName} already attached to User(s) USD{AttachUsersNamesExists}. 453 USER_ATTACH_TAG_TO_USER_GROUP_EXISTS Warning Tag USD{TagName} already attached to Group(s) USD{AttachGroupsNamesExists}. 454 USER_ATTACH_TAG_TO_VM_EXISTS Warning Tag USD{TagName} already attached to VM(s) USD{VmsNamesExists}. 455 USER_ATTACH_TAG_TO_VDS_EXISTS Warning Tag USD{TagName} already attached to Host(s) USD{VdsNamesExists}. 456 USER_LOGGED_IN_VM Info User USD{GuestUser} logged in to VM USD{VmName}. 457 USER_LOGGED_OUT_VM Info User USD{GuestUser} logged out from VM USD{VmName}. 458 USER_LOCKED_VM Info User USD{GuestUser} locked VM USD{VmName}. 459 USER_UNLOCKED_VM Info User USD{GuestUser} unlocked VM USD{VmName}. 460 USER_ATTACH_TAG_TO_TEMPLATE_EXISTS Warning Tag USD{TagName} already attached to Template(s) USD{TemplatesNamesExists}. 467 UPDATE_TAGS_VM_DEFAULT_DISPLAY_TYPE Info Vm USD{VmName} tag default display type was updated 468 UPDATE_TAGS_VM_DEFAULT_DISPLAY_TYPE_FAILED Info Failed to update Vm USD{VmName} tag default display type 470 USER_ATTACH_VM_POOL_TO_AD_GROUP_INTERNAL Info Group USD{GroupName} was attached to VM Pool USD{VmPoolName}. 
471 USER_ATTACH_VM_POOL_TO_AD_GROUP_FAILED_INTERNAL Error Failed to attach Group USD{GroupName} to VM Pool USD{VmPoolName}. 472 USER_ATTACH_USER_TO_POOL_INTERNAL Info User USD{AdUserName} was attached to VM Pool USD{VmPoolName}. 473 USER_ATTACH_USER_TO_POOL_FAILED_INTERNAL Error Failed to attach User USD{AdUserName} to VM Pool USD{VmPoolName} (User: USD{UserName}). 493 VDS_ALREADY_IN_REQUESTED_STATUS Warning Host USD{HostName} is already USD{AgentStatus}, Power Management USD{Operation} operation skipped. 494 VDS_MANUAL_FENCE_STATUS Info Manual fence for host USD{VdsName} was started. 495 VDS_MANUAL_FENCE_STATUS_FAILED Error Manual fence for host USD{VdsName} failed. 496 VDS_FENCE_STATUS Info Host USD{VdsName} power management was verified successfully. 497 VDS_FENCE_STATUS_FAILED Error Failed to verify Host USD{VdsName} power management. 498 VDS_APPROVE Info Host USD{VdsName} was successfully approved by user USD{UserName}. 499 VDS_APPROVE_FAILED Error Failed to approve Host USD{VdsName}. 500 VDS_FAILED_TO_RUN_VMS Error Host USD{VdsName} will be switched to Error status for USD{Time} minutes because it failed to run a VM. 501 USER_SUSPEND_VM Info Suspending VM USD{VmName} was initiated by User USD{UserName} (Host: USD{VdsName}). 502 USER_FAILED_SUSPEND_VM Error Failed to suspend VM USD{VmName} (Host: USD{VdsName}). 503 USER_SUSPEND_VM_OK Info VM USD{VmName} on Host USD{VdsName} is suspended. 504 VDS_INSTALL Info Host USD{VdsName} installed 505 VDS_INSTALL_FAILED Error Host USD{VdsName} installation failed. USD{FailedInstallMessage}. 506 VDS_INITIATED_RUN_VM Info Trying to restart VM USD{VmName} on Host USD{VdsName} 509 VDS_INSTALL_IN_PROGRESS Info Installing Host USD{VdsName}. USD{Message}. 510 VDS_INSTALL_IN_PROGRESS_WARNING Warning Host USD{VdsName} installation in progress . USD{Message}. 511 VDS_INSTALL_IN_PROGRESS_ERROR Error An error has occurred during installation of Host USD{VdsName}: USD{Message}. 512 USER_SUSPEND_VM_FINISH_SUCCESS Info Suspending VM USD{VmName} has been completed. 513 VDS_RECOVER_FAILED_VMS_UNKNOWN Error Host USD{VdsName} cannot be reached, VMs state on this host are marked as Unknown. 514 VDS_INITIALIZING Warning Host USD{VdsName} is initializing. Message: USD{ErrorMessage} 515 VDS_CPU_LOWER_THAN_CLUSTER Warning Host USD{VdsName} moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : USD{CpuFlags} 516 VDS_CPU_RETRIEVE_FAILED Warning Failed to determine Host USD{VdsName} CPU level - could not retrieve CPU flags. 517 VDS_SET_NONOPERATIONAL Info Host USD{VdsName} moved to Non-Operational state. 518 VDS_SET_NONOPERATIONAL_FAILED Error Failed to move Host USD{VdsName} to Non-Operational state. 519 VDS_SET_NONOPERATIONAL_NETWORK Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} networks, the following networks are missing on host: 'USD{Networks}' 520 USER_ATTACH_USER_TO_VM Info User USD{AdUserName} was attached to VM USD{VmName} by USD{UserName}. 521 USER_SUSPEND_VM_FINISH_FAILURE Error Failed to complete suspending of VM USD{VmName}. 522 VDS_SET_NONOPERATIONAL_DOMAIN Warning Host USD{VdsName} cannot access the Storage Domain(s) USD{StorageDomainNames} attached to the Data Center USD{StoragePoolName}. Setting Host state to Non-Operational. 523 VDS_SET_NONOPERATIONAL_DOMAIN_FAILED Error Host USD{VdsName} cannot access the Storage Domain(s) USD{StorageDomainNames} attached to the Data Center USD{StoragePoolName}. Failed to set Host state to Non-Operational. 
524 VDS_DOMAIN_DELAY_INTERVAL Warning Storage domain USD{StorageDomainName} experienced a high latency of USD{Delay} seconds from host USD{VdsName}. This may cause performance and functional issues. Please consult your Storage Administrator. 525 VDS_INITIATED_RUN_AS_STATELESS_VM_NOT_YET_RUNNING Info Starting VM USD{VmName} as stateless was initiated. 528 USER_EJECT_VM_DISK Info CD was ejected from VM USD{VmName} by USD{UserName}. 530 VDS_MANUAL_FENCE_FAILED_CALL_FENCE_SPM Warning Manual fence did not revoke the selected SPM (USD{VdsName}) since the master storage domain\n was not active or could not use another host for the fence operation. 531 VDS_LOW_MEM Warning Available memory of host USD{HostName} in cluster USD{Cluster} [USD{AvailableMemory} MB] is under defined threshold [USD{Threshold} MB]. 532 VDS_HIGH_MEM_USE Warning Used memory of host USD{HostName} in cluster USD{Cluster} [USD{UsedMemory}%] exceeded defined threshold [USD{Threshold}%]. 533 VDS_HIGH_NETWORK_USE Warning 534 VDS_HIGH_CPU_USE Warning Used CPU of host USD{HostName} [USD{UsedCpu}%] exceeded defined threshold [USD{Threshold}%]. 535 VDS_HIGH_SWAP_USE Warning Used swap memory of host USD{HostName} [USD{UsedSwap}%] exceeded defined threshold [USD{Threshold}%]. 536 VDS_LOW_SWAP Warning Available swap memory of host USD{HostName} [USD{AvailableSwapMemory} MB] is under defined threshold [USD{Threshold} MB]. 537 VDS_INITIATED_RUN_VM_AS_STATELESS Info VM USD{VmName} was restarted on Host USD{VdsName} as stateless 538 USER_RUN_VM_AS_STATELESS Info VM USD{VmName} started on Host USD{VdsName} as stateless 539 VDS_AUTO_FENCE_STATUS Info Auto fence for host USD{VdsName} was started. 540 VDS_AUTO_FENCE_STATUS_FAILED Error Auto fence for host USD{VdsName} failed. 541 VDS_AUTO_FENCE_FAILED_CALL_FENCE_SPM Warning Auto fence did not revoke the selected SPM (USD{VdsName}) since the master storage domain\n was not active or could not use another host for the fence operation. 550 VDS_PACKAGES_IN_PROGRESS Info Package update Host USD{VdsName}. USD{Message}. 551 VDS_PACKAGES_IN_PROGRESS_WARNING Warning Host USD{VdsName} update packages in progress . USD{Message}. 552 VDS_PACKAGES_IN_PROGRESS_ERROR Error Failed to update packages Host USD{VdsName}. USD{Message}. 555 USER_MOVE_TAG Info Tag USD{TagName} was moved from USD{OldParnetTagName} to USD{NewParentTagName} by USD{UserName}. 556 USER_MOVE_TAG_FAILED Error Failed to move Tag USD{TagName} from USD{OldParnetTagName} to USD{NewParentTagName} (User: USD{UserName}). 560 VDS_ANSIBLE_INSTALL_STARTED Info Ansible host-deploy playbook execution has started on host USD{VdsName}. 561 VDS_ANSIBLE_INSTALL_FINISHED Info Ansible host-deploy playbook execution has successfully finished on host USD{VdsName}. 562 VDS_ANSIBLE_HOST_REMOVE_STARTED Info Ansible host-remove playbook execution started on host USD{VdsName}. 563 VDS_ANSIBLE_HOST_REMOVE_FINISHED Info Ansible host-remove playbook execution has successfully finished on host USD{VdsName}. For more details check log USD{LogFile} 564 VDS_ANSIBLE_HOST_REMOVE_FAILED Warning Ansible host-remove playbook execution failed on host USD{VdsName}. For more details please check log USD{LogFile} 565 VDS_ANSIBLE_HOST_REMOVE_EXECUTION_FAILED Info Ansible host-remove playbook execution failed on host USD{VdsName} with message: USD{Message} 600 USER_VDS_MAINTENANCE Info Host USD{VdsName} was switched to Maintenance mode by USD{UserName} (Reason: USD{Reason}). 601 CPU_FLAGS_NX_IS_MISSING Warning Host USD{VdsName} is missing the NX cpu flag. 
This flag can be enabled via the host BIOS. Please set Disable Execute (XD) for an Intel host, or No Execute (NX) for AMD. Please make sure to completely power off the host for this change to take effect. 602 USER_VDS_MAINTENANCE_MIGRATION_FAILED Warning Host USD{VdsName} cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: USD{failedVms} (User: USD{UserName}). 603 VDS_SET_NONOPERATIONAL_IFACE_DOWN Warning Host USD{VdsName} moved to Non-Operational state because interfaces which are down are needed by required networks in the current cluster: 'USD{NicsWithNetworks}'. 604 VDS_TIME_DRIFT_ALERT Warning Host USD{VdsName} has time-drift of USD{Actual} seconds while maximum configured value is USD{Max} seconds. 605 PROXY_HOST_SELECTION Info Host USD{Proxy} from USD{Origin} was chosen as a proxy to execute fencing on Host USD{VdsName}. 606 HOST_REFRESHED_CAPABILITIES Info Successfully refreshed the capabilities of host USD{VdsName}. 607 HOST_REFRESH_CAPABILITIES_FAILED Error Failed to refresh the capabilities of host USD{VdsName}. 608 HOST_INTERFACE_HIGH_NETWORK_USE Warning Host USD{HostName} has network interface which exceeded the defined threshold [USD{Threshold}%] (USD{InterfaceName}: transmit rate[USD{TransmitRate}%], receive rate [USD{ReceiveRate}%]) 609 HOST_INTERFACE_STATE_UP Normal Interface USD{InterfaceName} on host USD{VdsName}, changed state to up 610 HOST_INTERFACE_STATE_DOWN Warning Interface USD{InterfaceName} on host USD{VdsName}, changed state to down 611 HOST_BOND_SLAVE_STATE_UP Normal Slave USD{SlaveName} of bond USD{BondName} on host USD{VdsName}, changed state to up 612 HOST_BOND_SLAVE_STATE_DOWN Warning Slave USD{SlaveName} of bond USD{BondName} on host USD{VdsName}, changed state to down 613 FENCE_KDUMP_LISTENER_IS_NOT_ALIVE Error Unable to determine if Kdump is in progress on host USD{VdsName}, because fence_kdump listener is not running. 614 KDUMP_FLOW_DETECTED_ON_VDS Info Kdump flow is in progress on host USD{VdsName}. 615 KDUMP_FLOW_NOT_DETECTED_ON_VDS Info Kdump flow is not in progress on host USD{VdsName}. 616 KDUMP_FLOW_FINISHED_ON_VDS Info Kdump flow finished on host USD{VdsName}. 617 KDUMP_DETECTION_NOT_CONFIGURED_ON_VDS Warning Kdump integration is enabled for host USD{VdsName}, but kdump is not configured properly on host. 618 HOST_REGISTRATION_FAILED_INVALID_CLUSTER Info No default or valid cluster was found, Host USD{VdsName} registration failed 619 HOST_PROTOCOL_INCOMPATIBLE_WITH_CLUSTER Warning Host USD{VdsName} uses not compatible protocol during activation (xmlrpc instead of jsonrpc). Please examine installation logs and VDSM logs for failures and reinstall the host. 620 USER_VDS_MAINTENANCE_WITHOUT_REASON Info Host USD{VdsName} was switched to Maintenance mode by USD{UserName}. 650 USER_UNDO_RESTORE_FROM_SNAPSHOT_START Info Undoing a Snapshot-Preview for VM USD{VmName} was initialized by USD{UserName}. 651 USER_UNDO_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS Info Undoing a Snapshot-Preview for VM USD{VmName} has been completed. 652 USER_UNDO_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE Error Failed to undo Snapshot-Preview for VM USD{VmName}. 700 DISK_ALIGNMENT_SCAN_START Info Starting alignment scan of disk 'USD{DiskAlias}'. 701 DISK_ALIGNMENT_SCAN_FAILURE Warning Alignment scan of disk 'USD{DiskAlias}' failed. 702 DISK_ALIGNMENT_SCAN_SUCCESS Info Alignment scan of disk 'USD{DiskAlias}' is complete. 
809 USER_ADD_CLUSTER Info Cluster USD{ClusterName} was added by USD{UserName} 810 USER_ADD_CLUSTER_FAILED Error Failed to add Host cluster (User: USD{UserName}) 811 USER_UPDATE_CLUSTER Info Host cluster USD{ClusterName} was updated by USD{UserName} 812 USER_UPDATE_CLUSTER_FAILED Error Failed to update Host cluster (User: USD{UserName}) 813 USER_REMOVE_CLUSTER Info Host cluster USD{ClusterName} was removed by USD{UserName} 814 USER_REMOVE_CLUSTER_FAILED Error Failed to remove Host cluster (User: USD{UserName}) 815 USER_VDC_LOGOUT_FAILED Error Failed to log out user USD{UserName} connected from 'USD{SourceIP}' using session 'USD{SessionID}'. 816 MAC_POOL_EMPTY Warning No MAC addresses left in the MAC Address Pool. 817 CERTIFICATE_FILE_NOT_FOUND Error Could not find oVirt Engine Certificate file. 818 RUN_VM_FAILED Error Cannot run VM USD{VmName} on Host USD{VdsName}. Error: USD{ErrMsg} 819 VDS_REGISTER_ERROR_UPDATING_HOST Error Host registration failed - cannot update Host Name for Host USD{VdsName2}. (Host: USD{VdsName1}) 820 VDS_REGISTER_ERROR_UPDATING_HOST_ALL_TAKEN Error Host registration failed - all available Host Names are taken. (Host: USD{VdsName1}) 821 VDS_REGISTER_HOST_IS_ACTIVE Error Host registration failed - cannot change Host Name of active Host USD{VdsName2}. (Host: USD{VdsName1}) 822 VDS_REGISTER_ERROR_UPDATING_NAME Error Host registration failed - cannot update Host Name for Host USD{VdsName2}. (Host: USD{VdsName1}) 823 VDS_REGISTER_ERROR_UPDATING_NAMES_ALL_TAKEN Error Host registration failed - all available Host Names are taken. (Host: USD{VdsName1}) 824 VDS_REGISTER_NAME_IS_ACTIVE Error Host registration failed - cannot change Host Name of active Host USD{VdsName2}. (Host: USD{VdsName1}) 825 VDS_REGISTER_AUTO_APPROVE_PATTERN Error Host registration failed - auto approve pattern error. (Host: USD{VdsName1}) 826 VDS_REGISTER_FAILED Error Host registration failed. (Host: USD{VdsName1}) 827 VDS_REGISTER_EXISTING_VDS_UPDATE_FAILED Error Host registration failed - cannot update existing Host. (Host: USD{VdsName1}) 828 VDS_REGISTER_SUCCEEDED Info Host USD{VdsName1} registered. 829 VM_MIGRATION_ON_CONNECT_CHECK_FAILED Error VM migration logic failed. (VM name: USD{VmName}) 830 VM_MIGRATION_ON_CONNECT_CHECK_SUCCEEDED Info Migration check failed to execute. 831 USER_VDC_SESSION_TERMINATED Info User USD{UserName} forcibly logged out user USD{TerminatedSessionUsername} connected from 'USD{SourceIP}' using session 'USD{SessionID}'. 832 USER_VDC_SESSION_TERMINATION_FAILED Error User USD{UserName} failed to forcibly log out user USD{TerminatedSessionUsername} connected from 'USD{SourceIP}' using session 'USD{SessionID}'. 833 MAC_ADDRESS_IS_IN_USE Warning Network Interface USD{IfaceName} has MAC address USD{MACAddr} which is in use. 834 VDS_REGISTER_EMPTY_ID Warning Host registration failed, empty host id (Host: USD{VdsHostName}) 835 SYSTEM_UPDATE_CLUSTER Info Host cluster USD{ClusterName} was updated by system 836 SYSTEM_UPDATE_CLUSTER_FAILED Info Failed to update Host cluster by system 837 MAC_ADDRESSES_POOL_NOT_INITIALIZED Warning Mac Address Pool is not initialized. USD{Message} 838 MAC_ADDRESS_IS_IN_USE_UNPLUG Warning Network Interface USD{IfaceName} has MAC address USD{MACAddr} which is in use, therefore it is being unplugged from VM USD{VmName}. 839 HOST_AVAILABLE_UPDATES_FAILED Error Failed to check for available updates on host USD{VdsName} with message 'USD{Message}'. 840 HOST_UPGRADE_STARTED Info Host USD{VdsName} upgrade was started (User: USD{UserName}). 
841 HOST_UPGRADE_FAILED Error Failed to upgrade Host USD{VdsName} (User: USD{UserName}). 842 HOST_UPGRADE_FINISHED Info Host USD{VdsName} upgrade was completed successfully. 845 HOST_CERTIFICATION_IS_ABOUT_TO_EXPIRE Warning Host USD{VdsName} certification is about to expire at USD{ExpirationDate}. Please renew the host's certification. 846 ENGINE_CERTIFICATION_HAS_EXPIRED Info Engine's certification has expired at USD{ExpirationDate}. Please renew the engine's certification. 847 ENGINE_CERTIFICATION_IS_ABOUT_TO_EXPIRE Warning Engine's certification is about to expire at USD{ExpirationDate}. Please renew the engine's certification. 848 ENGINE_CA_CERTIFICATION_HAS_EXPIRED Info Engine's CA certification has expired at USD{ExpirationDate}. 849 ENGINE_CA_CERTIFICATION_IS_ABOUT_TO_EXPIRE Warning Engine's CA certification is about to expire at USD{ExpirationDate}. 850 USER_ADD_PERMISSION Info User/Group USD{SubjectName}, Namespace USD{Namespace}, Authorization provider: USD{Authz} was granted permission for Role USD{RoleName} on USD{VdcObjectType} USD{VdcObjectName}, by USD{UserName}. 851 USER_ADD_PERMISSION_FAILED Error User USD{UserName} failed to grant permission for Role USD{RoleName} on USD{VdcObjectType} USD{VdcObjectName} to User/Group USD{SubjectName}. 852 USER_REMOVE_PERMISSION Info User/Group USD{SubjectName} Role USD{RoleName} permission was removed from USD{VdcObjectType} USD{VdcObjectName} by USD{UserName} 853 USER_REMOVE_PERMISSION_FAILED Error User USD{UserName} failed to remove permission for Role USD{RoleName} from USD{VdcObjectType} USD{VdcObjectName} to User/Group USD{SubjectName} 854 USER_ADD_ROLE Info Role USD{RoleName} granted to USD{UserName} 855 USER_ADD_ROLE_FAILED Error Failed to grant role USD{RoleName} (User USD{UserName}) 856 USER_UPDATE_ROLE Info USD{UserName} Role was updated to the USD{RoleName} Role 857 USER_UPDATE_ROLE_FAILED Error Failed to update role USD{RoleName} to USD{UserName} 858 USER_REMOVE_ROLE Info Role USD{RoleName} removed from USD{UserName} 859 USER_REMOVE_ROLE_FAILED Error Failed to remove role USD{RoleName} (User USD{UserName}) 860 USER_ATTACHED_ACTION_GROUP_TO_ROLE Info Action group USD{ActionGroup} was attached to Role USD{RoleName} by USD{UserName} 861 USER_ATTACHED_ACTION_GROUP_TO_ROLE_FAILED Error Failed to attach Action group USD{ActionGroup} to Role USD{RoleName} (User: USD{UserName}) 862 USER_DETACHED_ACTION_GROUP_FROM_ROLE Info Action group USD{ActionGroup} was detached from Role USD{RoleName} by USD{UserName} 863 USER_DETACHED_ACTION_GROUP_FROM_ROLE_FAILED Error Failed to detach Action group USD{ActionGroup} from Role USD{RoleName} by USD{UserName} 864 USER_ADD_ROLE_WITH_ACTION_GROUP Info Role USD{RoleName} was added by USD{UserName} 865 USER_ADD_ROLE_WITH_ACTION_GROUP_FAILED Error Failed to add role USD{RoleName} 866 USER_ADD_SYSTEM_PERMISSION Info User/Group USD{SubjectName} was granted permission for Role USD{RoleName} on USD{VdcObjectType} by USD{UserName}. 867 USER_ADD_SYSTEM_PERMISSION_FAILED Error User USD{UserName} failed to grant permission for Role USD{RoleName} on USD{VdcObjectType} to User/Group USD{SubjectName}.
868 USER_REMOVE_SYSTEM_PERMISSION Info User/Group USD{SubjectName} Role USD{RoleName} permission was removed from USD{VdcObjectType} by USD{UserName} 869 USER_REMOVE_SYSTEM_PERMISSION_FAILED Error User USD{UserName} failed to remove permission for Role USD{RoleName} from USD{VdcObjectType} to User/Group USD{SubjectName} 870 USER_ADD_PROFILE Info Profile created for USD{UserName} 871 USER_ADD_PROFILE_FAILED Error Failed to create profile for USD{UserName} 872 USER_UPDATE_PROFILE Info Updated profile for USD{UserName} 873 USER_UPDATE_PROFILE_FAILED Error Failed to update profile for USD{UserName} 874 USER_REMOVE_PROFILE Info Removed profile for USD{UserName} 875 USER_REMOVE_PROFILE_FAILED Error Failed to remove profile for USD{UserName} 876 HOST_CERTIFICATION_IS_INVALID Error Host USD{VdsName} certification is invalid. The certification has no peer certificates. 877 HOST_CERTIFICATION_HAS_EXPIRED Info Host USD{VdsName} certification has expired at USD{ExpirationDate}. Please renew the host's certification. 878 ENGINE_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT Info Engine's certification is about to expire at USD{ExpirationDate}. Please renew the engine's certification. 879 HOST_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT Info Host USD{VdsName} certification is about to expire at USD{ExpirationDate}. Please renew the host's certification. 880 HOST_CERTIFICATION_ENROLLMENT_STARTED Normal Enrolling certificate for host USD{VdsName} was started (User: USD{UserName}). 881 HOST_CERTIFICATION_ENROLLMENT_FINISHED Normal Enrolling certificate for host USD{VdsName} was completed successfully (User: USD{UserName}). 882 HOST_CERTIFICATION_ENROLLMENT_FAILED Error Failed to enroll certificate for host USD{VdsName} (User: USD{UserName}). 883 ENGINE_CA_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT Info Engine's CA certification is about to expire at USD{ExpirationDate}. 884 HOST_AVAILABLE_UPDATES_STARTED Info Started to check for available updates on host USD{VdsName}. 885 HOST_AVAILABLE_UPDATES_FINISHED Info Check for available updates on host USD{VdsName} was completed successfully with message 'USD{Message}'. 886 HOST_AVAILABLE_UPDATES_PROCESS_IS_ALREADY_RUNNING Warning Failed to check for available updates on host USD{VdsName}: Another process is already running. 887 HOST_AVAILABLE_UPDATES_SKIPPED_UNSUPPORTED_STATUS Warning Failed to check for available updates on host USD{VdsName}: Unsupported host status. 890 HOST_UPGRADE_FINISHED_MANUAL_HA Warning Host USD{VdsName} upgrade was completed successfully, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually. 900 AD_COMPUTER_ACCOUNT_SUCCEEDED Info Account creation successful. 901 AD_COMPUTER_ACCOUNT_FAILED Error Account creation failed. 918 USER_FORCE_REMOVE_STORAGE_POOL Info Data Center USD{StoragePoolName} was forcibly removed by USD{UserName} 919 USER_FORCE_REMOVE_STORAGE_POOL_FAILED Error Failed to forcibly remove Data Center USD{StoragePoolName}. (User: USD{UserName}) 925 MAC_ADDRESS_IS_EXTERNAL Warning VM USD{VmName} has MAC address(es) USD{MACAddr}, which is/are out of its MAC pool definitions. 926 NETWORK_REMOVE_BOND Info Remove bond: USD{BondName} for Host: USD{VdsName} (User:USD{UserName}). 927 NETWORK_REMOVE_BOND_FAILED Error Failed to remove bond: USD{BondName} for Host: USD{VdsName} (User:USD{UserName}). 
928 NETWORK_VDS_NETWORK_MATCH_CLUSTER Info Vds USD{VdsName} network match to cluster USD{ClusterName} 929 NETWORK_VDS_NETWORK_NOT_MATCH_CLUSTER Error Vds USD{VdsName} network does not match to cluster USD{ClusterName} 930 NETWORK_REMOVE_VM_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was removed from VM USD{VmName}. (User: USD{UserName}) 931 NETWORK_REMOVE_VM_INTERFACE_FAILED Error Failed to remove Interface USD{InterfaceName} (USD{InterfaceType}) from VM USD{VmName}. (User: USD{UserName}) 932 NETWORK_ADD_VM_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was added to VM USD{VmName}. (User: USD{UserName}) 933 NETWORK_ADD_VM_INTERFACE_FAILED Error Failed to add Interface USD{InterfaceName} (USD{InterfaceType}) to VM USD{VmName}. (User: USD{UserName}) 934 NETWORK_UPDATE_VM_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was updated for VM USD{VmName}. USD{LinkState} (User: USD{UserName}) 935 NETWORK_UPDATE_VM_INTERFACE_FAILED Error Failed to update Interface USD{InterfaceName} (USD{InterfaceType}) for VM USD{VmName}. (User: USD{UserName}) 936 NETWORK_ADD_TEMPLATE_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was added to Template USD{VmTemplateName}. (User: USD{UserName}) 937 NETWORK_ADD_TEMPLATE_INTERFACE_FAILED Error Failed to add Interface USD{InterfaceName} (USD{InterfaceType}) to Template USD{VmTemplateName}. (User: USD{UserName}) 938 NETWORK_REMOVE_TEMPLATE_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was removed from Template USD{VmTemplateName}. (User: USD{UserName}) 939 NETWORK_REMOVE_TEMPLATE_INTERFACE_FAILED Error Failed to remove Interface USD{InterfaceName} (USD{InterfaceType}) from Template USD{VmTemplateName}. (User: USD{UserName}) 940 NETWORK_UPDATE_TEMPLATE_INTERFACE Info Interface USD{InterfaceName} (USD{InterfaceType}) was updated for Template USD{VmTemplateName}. (User: USD{UserName}) 941 NETWORK_UPDATE_TEMPLATE_INTERFACE_FAILED Error Failed to update Interface USD{InterfaceName} (USD{InterfaceType}) for Template USD{VmTemplateName}. (User: USD{UserName}) 942 NETWORK_ADD_NETWORK Info Network USD{NetworkName} was added to Data Center: USD{StoragePoolName} 943 NETWORK_ADD_NETWORK_FAILED Error Failed to add Network USD{NetworkName} to Data Center: USD{StoragePoolName} 944 NETWORK_REMOVE_NETWORK Info Network USD{NetworkName} was removed from Data Center: USD{StoragePoolName} 945 NETWORK_REMOVE_NETWORK_FAILED Error Failed to remove Network USD{NetworkName} from Data Center: USD{StoragePoolName} 946 NETWORK_ATTACH_NETWORK_TO_CLUSTER Info Network USD{NetworkName} attached to Cluster USD{ClusterName} 947 NETWORK_ATTACH_NETWORK_TO_CLUSTER_FAILED Error Failed to attach Network USD{NetworkName} to Cluster USD{ClusterName} 948 NETWORK_DETACH_NETWORK_TO_CLUSTER Info Network USD{NetworkName} detached from Cluster USD{ClusterName} 949 NETWORK_DETACH_NETWORK_TO_CLUSTER_FAILED Error Failed to detach Network USD{NetworkName} from Cluster USD{ClusterName} 950 USER_ADD_STORAGE_POOL Info Data Center USD{StoragePoolName}, Compatibility Version USD{CompatibilityVersion} and Quota Type USD{QuotaEnforcementType} was added by USD{UserName} 951 USER_ADD_STORAGE_POOL_FAILED Error Failed to add Data Center USD{StoragePoolName}. (User: USD{UserName}) 952 USER_UPDATE_STORAGE_POOL Info Data Center USD{StoragePoolName} was updated by USD{UserName} 953 USER_UPDATE_STORAGE_POOL_FAILED Error Failed to update Data Center USD{StoragePoolName}. 
(User: USD{UserName}) 954 USER_REMOVE_STORAGE_POOL Info Data Center USD{StoragePoolName} was removed by USD{UserName} 955 USER_REMOVE_STORAGE_POOL_FAILED Error Failed to remove Data Center USD{StoragePoolName}. (User: USD{UserName}) 956 USER_ADD_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was added by USD{UserName} 957 USER_ADD_STORAGE_DOMAIN_FAILED Error Failed to add Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 958 USER_UPDATE_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was updated by USD{UserName} 959 USER_UPDATE_STORAGE_DOMAIN_FAILED Error Failed to update Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 960 USER_REMOVE_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was removed by USD{UserName} 961 USER_REMOVE_STORAGE_DOMAIN_FAILED Error Failed to remove Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 962 USER_ATTACH_STORAGE_DOMAIN_TO_POOL Info Storage Domain USD{StorageDomainName} was attached to Data Center USD{StoragePoolName} by USD{UserName} 963 USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED Error Failed to attach Storage Domain USD{StorageDomainName} to Data Center USD{StoragePoolName}. (User: USD{UserName}) 964 USER_DETACH_STORAGE_DOMAIN_FROM_POOL Info Storage Domain USD{StorageDomainName} was detached from Data Center USD{StoragePoolName} by USD{UserName} 965 USER_DETACH_STORAGE_DOMAIN_FROM_POOL_FAILED Error Failed to detach Storage Domain USD{StorageDomainName} from Data Center USD{StoragePoolName}. (User: USD{UserName}) 966 USER_ACTIVATED_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was activated by USD{UserName} 967 USER_ACTIVATE_STORAGE_DOMAIN_FAILED Error Failed to activate Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) by USD{UserName} 968 USER_DEACTIVATED_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was deactivated and has moved to 'Preparing for maintenance' until it will no longer be accessed by any Host of the Data Center. 969 USER_DEACTIVATE_STORAGE_DOMAIN_FAILED Error Failed to deactivate Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}). 970 SYSTEM_DEACTIVATED_STORAGE_DOMAIN Warning Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was deactivated by system because it's not visible by any of the hosts. 971 SYSTEM_DEACTIVATE_STORAGE_DOMAIN_FAILED Error Failed to deactivate Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}). 972 USER_EXTENDED_STORAGE_DOMAIN Info Storage USD{StorageDomainName} has been extended by USD{UserName}. Please wait for refresh. 973 USER_EXTENDED_STORAGE_DOMAIN_FAILED Error Failed to extend Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 974 USER_REMOVE_VG Info Volume group USD{VgId} was removed by USD{UserName}. 975 USER_REMOVE_VG_FAILED Error Failed to remove Volume group USD{VgId}. (User: UserName) 976 USER_ACTIVATE_STORAGE_POOL Info Data Center USD{StoragePoolName} was activated. (User: USD{UserName}) 977 USER_ACTIVATE_STORAGE_POOL_FAILED Error Failed to activate Data Center USD{StoragePoolName}. (User: USD{UserName}) 978 SYSTEM_FAILED_CHANGE_STORAGE_POOL_STATUS Error Failed to change Data Center USD{StoragePoolName} status. 979 SYSTEM_CHANGE_STORAGE_POOL_STATUS_NO_HOST_FOR_SPM Error Fencing failed on Storage Pool Manager USD{VdsName} for Data Center USD{StoragePoolName}. Setting status to Non-Operational. 
980 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC Warning Invalid status on Data Center USD{StoragePoolName}. Setting status to Non Responsive. 981 USER_FORCE_REMOVE_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} was forcibly removed by USD{UserName} 982 USER_FORCE_REMOVE_STORAGE_DOMAIN_FAILED Error Failed to forcibly remove Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 983 RECONSTRUCT_MASTER_FAILED_NO_MASTER Warning No valid Data Storage Domains are available in Data Center USD{StoragePoolName} (please check your storage infrastructure). 984 RECONSTRUCT_MASTER_DONE Info Reconstruct Master Domain for Data Center USD{StoragePoolName} completed. 985 RECONSTRUCT_MASTER_FAILED Error Failed to Reconstruct Master Domain for Data Center USD{StoragePoolName}. 986 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_SEARCHING_NEW_SPM Warning Data Center is being initialized, please wait for initialization to complete. 987 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_WITH_ERROR Warning Invalid status on Data Center USD{StoragePoolName}. Setting Data Center status to Non Responsive (On host USD{VdsName}, Error: USD{Error}). 988 USER_CONNECT_HOSTS_TO_LUN_FAILED Error Failed to connect Host USD{VdsName} to device. (User: USD{UserName}) 989 SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_FROM_NON_OPERATIONAL Info Try to recover Data Center USD{StoragePoolName}. Setting status to Non Responsive. 990 SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC Warning Sync Error on Master Domain between Host USD{VdsName} and oVirt Engine. Domain: USD{StorageDomainName} is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 991 RECOVERY_STORAGE_POOL Info Data Center USD{StoragePoolName} was recovered by USD{UserName} 992 RECOVERY_STORAGE_POOL_FAILED Error Failed to recover Data Center USD{StoragePoolName} (User:USD{UserName}) 993 SYSTEM_CHANGE_STORAGE_POOL_STATUS_RESET_IRS Info Data Center USD{StoragePoolName} was reset. Setting status to Non Responsive (Elect new Storage Pool Manager). 994 CONNECT_STORAGE_SERVERS_FAILED Warning Failed to connect Host USD{VdsName} to Storage Servers 995 CONNECT_STORAGE_POOL_FAILED Warning Failed to connect Host USD{VdsName} to Storage Pool USD{StoragePoolName} 996 STORAGE_DOMAIN_ERROR Error The error message for connection USD{Connection} returned by VDSM was: USD{ErrorMessage} 997 REFRESH_REPOSITORY_IMAGE_LIST_FAILED Error Refresh image list failed for domain(s): USD{imageDomains}. Please check domain activity. 998 REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED Info Refresh image list succeeded for domain(s): USD{imageDomains} 999 STORAGE_ALERT_VG_METADATA_CRITICALLY_FULL Error The system has reached the 80% watermark on the VG metadata area size on USD{StorageDomainName}.\nThis is due to a high number of Vdisks or large Vdisks size allocated on this specific VG. 1000 STORAGE_ALERT_SMALL_VG_METADATA Warning The allocated VG metadata area size is smaller than 50MB on USD{StorageDomainName},\nwhich might limit its capacity (the number of Vdisks and/or their size). 1001 USER_RUN_VM_FAILURE_STATELESS_SNAPSHOT_LEFT Error Failed to start VM USD{VmName} because a snapshot of the stateless state exists. The snapshot will be deleted. 1002 USER_ATTACH_STORAGE_DOMAINS_TO_POOL Info Storage Domains were attached to Data Center USD{StoragePoolName} by USD{UserName} 1003 USER_ATTACH_STORAGE_DOMAINS_TO_POOL_FAILED Error Failed to attach Storage Domains to Data Center USD{StoragePoolName}.
(User: USD{UserName}) 1004 STORAGE_DOMAIN_TASKS_ERROR Warning Storage Domain USD{StorageDomainName} is down while there are tasks running on it. These tasks may fail. 1005 UPDATE_OVF_FOR_STORAGE_POOL_FAILED Warning Failed to update VMs/Templates OVF data in Data Center USD{StoragePoolName}. 1006 UPGRADE_STORAGE_POOL_ENCOUNTERED_PROBLEMS Warning Data Center USD{StoragePoolName} has encountered problems during upgrade process. 1007 REFRESH_REPOSITORY_IMAGE_LIST_INCOMPLETE Warning Refresh image list probably incomplete for domain USD{imageDomain}, only USD{imageListSize} images discovered. 1008 NUMBER_OF_LVS_ON_STORAGE_DOMAIN_EXCEEDED_THRESHOLD Warning The number of LVs on the domain USD{storageDomainName} exceeded USD{maxNumOfLVs}, you are approaching the limit where performance may degrade. 1009 USER_DEACTIVATE_STORAGE_DOMAIN_OVF_UPDATE_INCOMPLETE Warning Failed to deactivate Storage Domain USD{StorageDomainName} as the engine was restarted during the operation, please retry. (Data Center USD{StoragePoolName}). 1010 RELOAD_CONFIGURATIONS_SUCCESS Info System Configurations reloaded successfully. 1011 RELOAD_CONFIGURATIONS_FAILURE Error System Configurations failed to reload. 1012 NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS Info Network Interface USD{InterfaceName} (USD{InterfaceType}) was plugged to VM USD{VmName}. (User: USD{UserName}) 1013 NETWORK_ACTIVATE_VM_INTERFACE_FAILURE Error Failed to plug Network Interface USD{InterfaceName} (USD{InterfaceType}) to VM USD{VmName}. (User: USD{UserName}) 1014 NETWORK_DEACTIVATE_VM_INTERFACE_SUCCESS Info Network Interface USD{InterfaceName} (USD{InterfaceType}) was unplugged from VM USD{VmName}. (User: USD{UserName}) 1015 NETWORK_DEACTIVATE_VM_INTERFACE_FAILURE Error Failed to unplug Network Interface USD{InterfaceName} (USD{InterfaceType}) from VM USD{VmName}. (User: USD{UserName}) 1016 UPDATE_FOR_OVF_STORES_FAILED Warning Failed to update OVF disks USD{DisksIds}, OVF data isn't updated on those OVF stores (Data Center USD{DataCenterName}, Storage Domain USD{StorageDomainName}). 1017 RETRIEVE_OVF_STORE_FAILED Warning Failed to retrieve VMs and Templates from the OVF disk of Storage Domain USD{StorageDomainName}. 1018 OVF_STORE_DOES_NOT_EXISTS Warning This Data center compatibility version does not support importing a data domain with its entities (VMs and Templates). The imported domain will be imported without them. 1019 UPDATE_DESCRIPTION_FOR_DISK_FAILED Error Failed to update the meta data description of disk USD{DiskName} (Data Center USD{DataCenterName}, Storage Domain USD{StorageDomainName}). 1020 UPDATE_DESCRIPTION_FOR_DISK_SKIPPED_SINCE_STORAGE_DOMAIN_NOT_ACTIVE Warning Not updating the metadata of Disk USD{DiskName} (Data Center USD{DataCenterName}. Since the Storage Domain USD{StorageDomainName} is not in active. 1022 USER_REFRESH_LUN_STORAGE_DOMAIN Info Resize LUNs operation succeeded. 1023 USER_REFRESH_LUN_STORAGE_DOMAIN_FAILED Error Failed to resize LUNs. 1024 USER_REFRESH_LUN_STORAGE_DIFFERENT_SIZE_DOMAIN_FAILED Error Failed to resize LUNs.\n Not all the hosts are seeing the same LUN size. 1025 VM_PAUSED Info VM USD{VmName} has been paused. 1026 FAILED_TO_STORE_ENTIRE_DISK_FIELD_IN_DISK_DESCRIPTION_METADATA Warning Failed to store field USD{DiskFieldName} as a part of USD{DiskAlias}'s description metadata due to storage space limitations. The field USD{DiskFieldName} will be truncated. 
1027 FAILED_TO_STORE_ENTIRE_DISK_FIELD_AND_REST_OF_FIELDS_IN_DISK_DESCRIPTION_METADATA Warning Failed to store field USD{DiskFieldName} as a part of USD{DiskAlias}'s description metadata due to storage space limitations. The value will be truncated and the following fields will not be stored at all: USD{DiskFieldsNames}. 1028 FAILED_TO_STORE_DISK_FIELDS_IN_DISK_DESCRIPTION_METADATA Warning Failed to store the following fields in the description metadata of disk USD{DiskAlias} due to storage space limitations: USD{DiskFieldsNames}. 1029 STORAGE_DOMAIN_MOVED_TO_MAINTENANCE Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) successfully moved to Maintenance as it's no longer accessed by any Host of the Data Center. 1030 USER_DEACTIVATED_LAST_MASTER_STORAGE_DOMAIN Info Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}) was deactivated. 1031 TRANSFER_IMAGE_INITIATED Info Image USD{TransferType} with disk USD{DiskAlias} was initiated by USD{UserName}. 1032 TRANSFER_IMAGE_SUCCEEDED Info Image USD{TransferType} with disk USD{DiskAlias} succeeded. 1033 TRANSFER_IMAGE_CANCELLED Info Image USD{TransferType} with disk USD{DiskAlias} was cancelled. 1034 TRANSFER_IMAGE_FAILED Error Image USD{TransferType} with disk USD{DiskAlias} failed. 1035 TRANSFER_IMAGE_TEARDOWN_FAILED Info Failed to tear down image USD{DiskAlias} after image transfer session. 1036 USER_SCAN_STORAGE_DOMAIN_FOR_UNREGISTERED_DISKS Info Storage Domain USD{StorageDomainName} has finished to scan for unregistered disks by USD{UserName}. 1037 USER_SCAN_STORAGE_DOMAIN_FOR_UNREGISTERED_DISKS_FAILED Error Storage Domain USD{StorageDomainName} failed to scan for unregistered disks by USD{UserName}. 1039 LUNS_BROKE_SD_PASS_DISCARD_SUPPORT Warning Luns with IDs: [USD{LunsIds}] were updated in the DB but caused the storage domain USD{StorageDomainName} (ID USD{storageDomainId}) to stop supporting passing discard from the guest to the underlying storage. Please configure these luns' discard support in the underlying storage or disable 'Enable Discard' for vm disks on this storage domain. 1040 DISKS_WITH_ILLEGAL_PASS_DISCARD_EXIST Warning Disks with IDs: [USD{DisksIds}] have their 'Enable Discard' on even though the underlying storage does not support it. Please configure the underlying storage to support discard or disable 'Enable Discard' for these disks. 1041 USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN_FAILED Error Failed to remove USD{LunId} from Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 1042 USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN Info USD{LunId} was removed from Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 1043 USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN_STARTED Info Started to remove USD{LunId} from Storage Domain USD{StorageDomainName}. (User: USD{UserName}) 1044 ILLEGAL_STORAGE_DOMAIN_DISCARD_AFTER_DELETE Warning The storage domain with id USD{storageDomainId} has its 'Discard After Delete' enabled even though the underlying storage does not support discard. Therefore, disks and snapshots on this storage domain will not be discarded before they are removed. 1045 LUNS_BROKE_SD_DISCARD_AFTER_DELETE_SUPPORT Warning Luns with IDs: [USD{LunsIds}] were updated in the DB but caused the storage domain USD{StorageDomainName} (ID USD{storageDomainId}) to stop supporting discard after delete. Please configure these luns' discard support in the underlying storage or disable 'Discard After Delete' for this storage domain. 
1046 STORAGE_DOMAINS_COULD_NOT_BE_SYNCED Info Storage domains with IDs [USD{StorageDomainsIds}] could not be synchronized. To synchronize them, please move them to maintenance and then activate. 1048 DIRECT_LUNS_COULD_NOT_BE_SYNCED Info Direct LUN disks with IDs [USD{DirectLunDisksIds}] could not be synchronized because there was no active host in the data center. Please synchronize them to get their latest information from the storage. 1052 OVF_STORES_UPDATE_IGNORED Normal OVFs update was ignored - nothing to update for storage domain 'USD{StorageDomainName}' 1060 UPLOAD_IMAGE_CLIENT_ERROR Error Unable to upload image to disk USD{DiskId} due to a client error. Make sure the selected file is readable. 1061 UPLOAD_IMAGE_XHR_TIMEOUT_ERROR Error Unable to upload image to disk USD{DiskId} due to a request timeout error. The upload bandwidth might be too slow. Please try to reduce the chunk size: 'engine-config -s UploadImageChunkSizeKB 1062 UPLOAD_IMAGE_NETWORK_ERROR Error Unable to upload image to disk USD{DiskId} due to a network error. Ensure that ovirt-imageio-proxy service is installed and configured and that ovirt-engine's CA certificate is registered as a trusted CA in the browser. The certificate can be fetched from USD{EngineUrl}/ovirt-engine/services/pki-resource?resource 1063 DOWNLOAD_IMAGE_NETWORK_ERROR Error Unable to download disk USD{DiskId} due to a network error. Make sure ovirt-imageio-proxy service is installed and configured, and ovirt-engine's certificate is registered as a valid CA in the browser. The certificate can be fetched from https://<engine_url>/ovirt-engine/services/pki-resource?resource 1064 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_TICKET_RENEW_FAILURE Error Transfer was stopped by system. Reason: failure in transfer image ticket renewal. 1065 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_MISSING_TICKET Error Transfer was stopped by system. Reason: missing transfer image ticket. 1067 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_MISSING_HOST Error Transfer was stopped by system. Reason: Could not find a suitable host for image data transfer. 1068 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_CREATE_TICKET Error Transfer was stopped by system. Reason: failed to create a signed image ticket. 1069 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_ADD_TICKET_TO_DAEMON Error Transfer was stopped by system. Reason: failed to add image ticket to ovirt-imageio-daemon. 1070 TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_ADD_TICKET_TO_PROXY Error Transfer was stopped by system. Reason: failed to add image ticket to ovirt-imageio-proxy. 1071 UPLOAD_IMAGE_PAUSED_BY_SYSTEM_TIMEOUT Error Upload was paused by system. Reason: timeout due to transfer inactivity. 1072 DOWNLOAD_IMAGE_CANCELED_TIMEOUT Error Download was canceled by system. Reason: timeout due to transfer inactivity. 1073 TRANSFER_IMAGE_PAUSED_BY_USER Normal Image transfer was paused by user (USD{UserName}). 1074 TRANSFER_IMAGE_RESUMED_BY_USER Normal Image transfer was resumed by user (USD{UserName}). 1098 NETWORK_UPDATE_DISPLAY_FOR_HOST_WITH_ACTIVE_VM Warning Display Network was updated on Host USD{VdsName} with active VMs attached. The change will be applied to those VMs after their reboot. Running VMs might lose display connectivity until then. 1099 NETWORK_UPDATE_DISPLAY_FOR_CLUSTER_WITH_ACTIVE_VM Warning Display Network (USD{NetworkName}) was updated for Cluster USD{ClusterName} with active VMs attached. The change will be applied to those VMs after their reboot.
1100 NETWORK_UPDATE_DISPLAY_TO_CLUSTER Info Update Display Network (USD{NetworkName}) for Cluster USD{ClusterName}. (User: USD{UserName}) 1101 NETWORK_UPDATE_DISPLAY_TO_CLUSTER_FAILED Error Failed to update Display Network (USD{NetworkName}) for Cluster USD{ClusterName}. (User: USD{UserName}) 1102 NETWORK_UPDATE_NETWORK_TO_VDS_INTERFACE Info Update Network USD{NetworkName} in Host USD{VdsName}. (User: USD{UserName}) 1103 NETWORK_UPDATE_NETWORK_TO_VDS_INTERFACE_FAILED Error Failed to update Network USD{NetworkName} in Host USD{VdsName}. (User: USD{UserName}) 1104 NETWORK_COMMINT_NETWORK_CHANGES Info Network changes were saved on host USD{VdsName} 1105 NETWORK_COMMINT_NETWORK_CHANGES_FAILED Error Failed to commit network changes on USD{VdsName} 1106 NETWORK_HOST_USING_WRONG_CLUSER_VLAN Warning USD{VdsName} is having wrong vlan id: USD{VlanIdHost}, expected vlan id: USD{VlanIdCluster} 1107 NETWORK_HOST_MISSING_CLUSER_VLAN Warning USD{VdsName} is missing vlan id: USD{VlanIdCluster} that is expected by the cluster 1108 VDS_NETWORK_MTU_DIFFER_FROM_LOGICAL_NETWORK Info 1109 BRIDGED_NETWORK_OVER_MULTIPLE_INTERFACES Warning Bridged network USD{NetworkName} is attached to multiple interfaces: USD{Interfaces} on Host USD{VdsName}. 1110 VDS_NETWORKS_OUT_OF_SYNC Warning Host USD{VdsName}'s following network(s) are not synchronized with their Logical Network configuration: USD{Networks}. 1111 VM_MIGRATION_FAILED_DURING_MOVE_TO_MAINTENANCE_NO_DESTINATION_VDS Error Migration failedUSD{DueToMigrationError} while Source Host is in 'preparing for maintenance' state.\n Consider manual intervention\: stopping/migrating Vms as Host's state will not\n turn to maintenance while VMs are still running on it.(VM: USD{VmName}, Source: USD{VdsName}). 1112 NETWORK_UPDTAE_NETWORK_ON_CLUSTER Info Network USD{NetworkName} on Cluster USD{ClusterName} updated. 1113 NETWORK_UPDTAE_NETWORK_ON_CLUSTER_FAILED Error Failed to update Network USD{NetworkName} on Cluster USD{ClusterName}. 1114 NETWORK_UPDATE_NETWORK Info Network USD{NetworkName} was updated on Data Center: USD{StoragePoolName} 1115 NETWORK_UPDATE_NETWORK_FAILED Error Failed to update Network USD{NetworkName} on Data Center: USD{StoragePoolName} 1116 NETWORK_UPDATE_VM_INTERFACE_LINK_UP Info Link State is UP. 1117 NETWORK_UPDATE_VM_INTERFACE_LINK_DOWN Info Link State is DOWN. 1118 INVALID_BOND_INTERFACE_FOR_MANAGEMENT_NETWORK_CONFIGURATION Error Failed to configure management network on host USD{VdsName}. Host USD{VdsName} has an invalid bond interface (USD{InterfaceName} contains less than 2 active slaves) for the management network configuration. 1119 VLAN_ID_MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION Error Failed to configure management network on host USD{VdsName}. Host USD{VdsName} has an interface USD{InterfaceName} for the management network configuration with VLAN-ID (USD{VlanId}), which is different from data-center definition (USD{MgmtVlanId}). 1120 SETUP_NETWORK_FAILED_FOR_MANAGEMENT_NETWORK_CONFIGURATION Error Failed to configure management network on host USD{VdsName} due to setup networks failure. 1121 PERSIST_NETWORK_FAILED_FOR_MANAGEMENT_NETWORK Warning Failed to configure management network on host USD{VdsName} due to failure in persisting the management network configuration. 1122 ADD_VNIC_PROFILE Info VM network interface profile USD{VnicProfileName} was added to network USD{NetworkName} in Data Center: USD{DataCenterName}. 
(User: USD{UserName}) 1123 ADD_VNIC_PROFILE_FAILED Error Failed to add VM network interface profile USD{VnicProfileName} to network USD{NetworkName} in Data Center: USD{DataCenterName} (User: USD{UserName}) 1124 UPDATE_VNIC_PROFILE Info VM network interface profile USD{VnicProfileName} was updated for network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1125 UPDATE_VNIC_PROFILE_FAILED Error Failed to update VM network interface profile USD{VnicProfileName} for network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1126 REMOVE_VNIC_PROFILE Info VM network interface profile USD{VnicProfileName} was removed from network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1127 REMOVE_VNIC_PROFILE_FAILED Error Failed to remove VM network interface profile USD{VnicProfileName} from network USD{NetworkName} in Data Center: USD{DataCenterName}. (User: USD{UserName}) 1128 NETWORK_WITHOUT_INTERFACES Warning Network USD{NetworkName} is not attached to any interface on host USD{VdsName}. 1129 VNIC_PROFILE_UNSUPPORTED_FEATURES Warning VM USD{VmName} has network interface USD{NicName} which is using profile USD{VnicProfile} with unsupported feature(s) 'USD{UnsupportedFeatures}' by VM cluster USD{ClusterName} (version USD{CompatibilityVersion}). 1131 REMOVE_NETWORK_BY_LABEL_FAILED Error Network USD{Network} cannot be removed from the following hosts: USD{HostNames} in data-center USD{StoragePoolName}. 1132 LABEL_NETWORK Info Network USD{NetworkName} was labeled USD{Label} in data-center USD{StoragePoolName}. 1133 LABEL_NETWORK_FAILED Error Failed to label network USD{NetworkName} with label USD{Label} in data-center USD{StoragePoolName}. 1134 UNLABEL_NETWORK Info Network USD{NetworkName} was unlabeled in data-center USD{StoragePoolName}. 1135 UNLABEL_NETWORK_FAILED Error Failed to unlabel network USD{NetworkName} in data-center USD{StoragePoolName}. 1136 LABEL_NIC Info Network interface card USD{NicName} was labeled USD{Label} on host USD{VdsName}. 1137 LABEL_NIC_FAILED Error Failed to label network interface card USD{NicName} with label USD{Label} on host USD{VdsName}. 1138 UNLABEL_NIC Info Label USD{Label} was removed from network interface card USD{NicName} on host USD{VdsName}. 1139 UNLABEL_NIC_FAILED Error Failed to remove label USD{Label} from network interface card USD{NicName} on host USD{VdsName}. 1140 SUBNET_REMOVED Info Subnet USD{SubnetName} was removed from provider USD{ProviderName}. (User: USD{UserName}) 1141 SUBNET_REMOVAL_FAILED Error Failed to remove subnet USD{SubnetName} from provider USD{ProviderName}. (User: USD{UserName}) 1142 SUBNET_ADDED Info Subnet USD{SubnetName} was added on provider USD{ProviderName}. (User: USD{UserName}) 1143 SUBNET_ADDITION_FAILED Error Failed to add subnet USD{SubnetName} on provider USD{ProviderName}. (User: USD{UserName}) 1144 CONFIGURE_NETWORK_BY_LABELS_WHEN_CHANGING_CLUSTER_FAILED Error Failed to configure networks on host USD{VdsName} while changing its cluster. 1145 PERSIST_NETWORK_ON_HOST Info (USD{Sequence}/USD{Total}): Applying changes for network(s) USD{NetworkNames} on host USD{VdsName}. (User: USD{UserName}) 1146 PERSIST_NETWORK_ON_HOST_FINISHED Info (USD{Sequence}/USD{Total}): Successfully applied changes for network(s) USD{NetworkNames} on host USD{VdsName}. (User: USD{UserName}) 1147 PERSIST_NETWORK_ON_HOST_FAILED Error (USD{Sequence}/USD{Total}): Failed to apply changes for network(s) USD{NetworkNames} on host USD{VdsName}. 
(User: USD{UserName}) 1148 MULTI_UPDATE_NETWORK_NOT_POSSIBLE Warning Cannot apply network USD{NetworkName} changes to hosts on unsupported data center USD{StoragePoolName}. (User: USD{UserName}) 1149 REMOVE_PORT_FROM_EXTERNAL_PROVIDER_FAILED Warning Failed to remove vNIC USD{NicName} from external network provider USD{ProviderName}. The vNIC can be identified on the provider by device id USD{NicId}. 1150 IMPORTEXPORT_EXPORT_VM Info Vm USD{VmName} was exported successfully to USD{StorageDomainName} 1151 IMPORTEXPORT_EXPORT_VM_FAILED Error Failed to export Vm USD{VmName} to USD{StorageDomainName} 1152 IMPORTEXPORT_IMPORT_VM Info Vm USD{VmName} was imported successfully to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1153 IMPORTEXPORT_IMPORT_VM_FAILED Error Failed to import Vm USD{VmName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1154 IMPORTEXPORT_REMOVE_TEMPLATE Info Template USD{VmTemplateName} was removed from USD{StorageDomainName} 1155 IMPORTEXPORT_REMOVE_TEMPLATE_FAILED Error Failed to remove Template USD{VmTemplateName} from USD{StorageDomainName} 1156 IMPORTEXPORT_EXPORT_TEMPLATE Info Template USD{VmTemplateName} was exported successfully to USD{StorageDomainName} 1157 IMPORTEXPORT_EXPORT_TEMPLATE_FAILED Error Failed to export Template USD{VmTemplateName} to USD{StorageDomainName} 1158 IMPORTEXPORT_IMPORT_TEMPLATE Info Template USD{VmTemplateName} was imported successfully to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1159 IMPORTEXPORT_IMPORT_TEMPLATE_FAILED Error Failed to import Template USD{VmTemplateName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1160 IMPORTEXPORT_REMOVE_VM Info Vm USD{VmName} was removed from USD{StorageDomainName} 1161 IMPORTEXPORT_REMOVE_VM_FAILED Error Failed to remove Vm USD{VmName} from USD{StorageDomainName} 1162 IMPORTEXPORT_STARTING_EXPORT_VM Info Starting to export Vm USD{VmName} to USD{StorageDomainName} 1163 IMPORTEXPORT_STARTING_IMPORT_TEMPLATE Info Starting to import Template USD{VmTemplateName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1164 IMPORTEXPORT_STARTING_EXPORT_TEMPLATE Info Starting to export Template USD{VmTemplateName} to USD{StorageDomainName} 1165 IMPORTEXPORT_STARTING_IMPORT_VM Info Starting to import Vm USD{VmName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName} 1166 IMPORTEXPORT_STARTING_REMOVE_TEMPLATE Info Starting to remove Template USD{VmTemplateName} from USD{StorageDomainName} 1167 IMPORTEXPORT_STARTING_REMOVE_VM Info Starting to remove Vm USD{VmName} from USD{StorageDomainName} 1168 IMPORTEXPORT_FAILED_TO_IMPORT_VM Warning Failed to read VM 'USD{ImportedVmName}' OVF, it may be corrupted. Underlying error message: USD{ErrorMessage} 1169 IMPORTEXPORT_FAILED_TO_IMPORT_TEMPLATE Warning Failed to read Template 'USD{Template}' OVF, it may be corrupted. Underlying error message: USD{ErrorMessage} 1170 IMPORTEXPORT_IMPORT_TEMPLATE_INVALID_INTERFACES Normal While importing Template USD{EntityName}, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s USD{Interfaces}. 1171 USER_ACCOUNT_PASSWORD_EXPIRED Error User USD{UserName} cannot login, as the user account password has expired. Please contact the system administrator. 1172 AUTH_FAILED_INVALID_CREDENTIALS Error User USD{UserName} cannot login, please verify the username and password.
1173 AUTH_FAILED_CLOCK_SKEW_TOO_GREAT Error User USD{UserName} cannot login, the engine clock is not synchronized with directory services. Please contact the system administrator. 1174 AUTH_FAILED_NO_KDCS_FOUND Error User USD{UserName} cannot login, authentication domain cannot be found. Please contact the system administrator. 1175 AUTH_FAILED_DNS_ERROR Error User USD{UserName} cannot login, there's an error in DNS configuration. Please contact the system administrator. 1176 AUTH_FAILED_OTHER Error User USD{UserName} cannot login, unknown kerberos error. Please contact the system administrator. 1177 AUTH_FAILED_DNS_COMMUNICATION_ERROR Error User USD{UserName} cannot login, cannot lookup DNS for SRV records. Please contact the system administrator. 1178 AUTH_FAILED_CONNECTION_TIMED_OUT Error User USD{UserName} cannot login, connection to LDAP server has timed out. Please contact the system administrator. 1179 AUTH_FAILED_WRONG_REALM Error User USD{UserName} cannot login, please verify your domain name. 1180 AUTH_FAILED_CONNECTION_ERROR Error User USD{UserName} cannot login, connection refused or some configuration problems exist. Possible DNS error. Please contact the system administrator. 1181 AUTH_FAILED_CANNOT_FIND_LDAP_SERVER_FOR_DOMAIN Error User USD{UserName} cannot login, cannot find valid LDAP server for domain. Please contact the system administrator. 1182 AUTH_FAILED_NO_USER_INFORMATION_WAS_FOUND Error User USD{UserName} cannot login, no user information was found. Please contact the system administrator. 1183 AUTH_FAILED_CLIENT_NOT_FOUND_IN_KERBEROS_DATABASE Error User USD{UserName} cannot login, user was not found in domain. Please contact the system administrator. 1184 AUTH_FAILED_INTERNAL_KERBEROS_ERROR Error User USD{UserName} cannot login, an internal error has occurred in the Kerberos implementation of the JVM. Please contact the system administrator. 1185 USER_ACCOUNT_EXPIRED Error The account for USD{UserName} has expired. Please contact the system administrator. 1186 IMPORTEXPORT_NO_PROXY_HOST_AVAILABLE_IN_DC Error No Host in Data Center 'USD{StoragePoolName}' can serve as a proxy to retrieve remote VMs information (User: USD{UserName}). 1187 IMPORTEXPORT_HOST_CANNOT_SERVE_AS_PROXY Error Host USD{VdsName} cannot be used as a proxy to retrieve remote VMs information since it is not up (User: USD{UserName}). 1188 IMPORTEXPORT_PARTIAL_VM_MISSING_ENTITIES Warning The following entities could not be verified and will not be part of the imported VM USD{VmName}: 'USD{MissingEntities}' (User: USD{UserName}). 1189 IMPORTEXPORT_IMPORT_VM_FAILED_UPDATING_OVF Error Failed to import Vm USD{VmName} to Data Center USD{StoragePoolName}, Cluster USD{ClusterName}, could not update VM data in export. 1190 USER_RESTORE_FROM_SNAPSHOT_START Info Restoring VM USD{VmName} from snapshot started by user USD{UserName}. 1191 VM_DISK_ALREADY_CHANGED Info CD USD{DiskName} is already inserted to VM USD{VmName}, disk change action was skipped. User: USD{UserName}. 1192 VM_DISK_ALREADY_EJECTED Info CD is already ejected from VM USD{VmName}, disk change action was skipped. User: USD{UserName}.
1193 IMPORTEXPORT_STARTING_CONVERT_VM Info Starting to convert Vm USD{VmName} 1194 IMPORTEXPORT_CONVERT_FAILED Info Failed to convert Vm USD{VmName} 1195 IMPORTEXPORT_CANNOT_GET_OVF Info Failed to get the configuration of converted Vm USD{VmName} 1196 IMPORTEXPORT_INVALID_OVF Info Failed to process the configuration of converted Vm USD{VmName} 1197 IMPORTEXPORT_PARTIAL_TEMPLATE_MISSING_ENTITIES Warning The following entities could not be verified and will not be part of the imported Template USD{VmTemplateName}: 'USD{MissingEntities}' (User: USD{UserName}). 1200 ENTITY_RENAMED Info USD{EntityType} USD{OldEntityName} was renamed from USD{OldEntityName} to USD{NewEntityName} by USD{UserName}. 1201 UPDATE_HOST_NIC_VFS_CONFIG Info The VFs configuration of network interface card USD{NicName} on host USD{VdsName} was updated. 1202 UPDATE_HOST_NIC_VFS_CONFIG_FAILED Error Failed to update the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1203 ADD_VFS_CONFIG_NETWORK Info Network USD{NetworkName} was added to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1204 ADD_VFS_CONFIG_NETWORK_FAILED Info Failed to add USD{NetworkName} to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1205 REMOVE_VFS_CONFIG_NETWORK Info Network USD{NetworkName} was removed from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1206 REMOVE_VFS_CONFIG_NETWORK_FAILED Info Failed to remove USD{NetworkName} from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1207 ADD_VFS_CONFIG_LABEL Info Label USD{Label} was added to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1208 ADD_VFS_CONFIG_LABEL_FAILED Info Failed to add USD{Label} to the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1209 REMOVE_VFS_CONFIG_LABEL Info Label USD{Label} was removed from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1210 REMOVE_VFS_CONFIG_LABEL_FAILED Info Failed to remove USD{Label} from the VFs configuration of network interface card USD{NicName} on host USD{VdsName}. 1211 USER_REDUCE_DOMAIN_DEVICES_STARTED Info Started to reduce Storage USD{StorageDomainName} devices. (User: USD{UserName}). 1212 USER_REDUCE_DOMAIN_DEVICES_FAILED_METADATA_DEVICES Error Failed to reduce Storage USD{StorageDomainName}. The following devices contains the domain metadata USD{deviceIds} and can't be reduced from the domain. (User: USD{UserName}). 1213 USER_REDUCE_DOMAIN_DEVICES_FAILED Error Failed to reduce Storage USD{StorageDomainName}. (User: USD{UserName}). 1214 USER_REDUCE_DOMAIN_DEVICES_SUCCEEDED Info Storage USD{StorageDomainName} has been reduced. (User: USD{UserName}). 1215 USER_REDUCE_DOMAIN_DEVICES_FAILED_NO_FREE_SPACE Error Can't reduce Storage USD{StorageDomainName}. There is not enough space on the destination devices of the storage domain. (User: USD{UserName}). 1216 USER_REDUCE_DOMAIN_DEVICES_FAILED_TO_GET_DOMAIN_INFO Error Can't reduce Storage USD{StorageDomainName}. Failed to get the domain info. (User: USD{UserName}). 1217 CANNOT_IMPORT_VM_WITH_LEASE_COMPAT_VERSION Warning The VM USD{VmName} has a VM lease defined yet will be imported without it as the VM compatibility version does not support VM leases. 
1218 CANNOT_IMPORT_VM_WITH_LEASE_STORAGE_DOMAIN Warning The VM USD{VmName} has a VM lease defined yet will be imported without it as the Storage Domain for the lease does not exist or is not active. 1219 FAILED_DETERMINE_STORAGE_DOMAIN_METADATA_DEVICES Error Failed to determine the metadata devices of Storage Domain USD{StorageDomainName}. 1220 HOT_PLUG_LEASE_FAILED Error Failed to hot plug lease to the VM USD{VmName}. The VM is running without a VM lease. 1221 HOT_UNPLUG_LEASE_FAILED Error Failed to hot unplug lease to the VM USD{VmName}. 1222 DETACH_DOMAIN_WITH_VMS_AND_TEMPLATES_LEASES Warning The deactivated domain USD{storageDomainName} contained leases for the following VMs/Templates: USD{entitiesNames}, a part of those VMs will not run and need manual removal of the VM leases. 1223 IMPORTEXPORT_STARTING_EXPORT_VM_TO_OVA Info Starting to export Vm USD{VmName} as a Virtual Appliance 1224 IMPORTEXPORT_EXPORT_VM_TO_OVA Info Vm USD{VmName} was exported successfully as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1225 IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED Error Failed to export Vm USD{VmName} as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1226 IMPORTEXPORT_STARTING_EXPORT_TEMPLATE_TO_OVA Info Starting to export Template USD{VmTemplateName} as a Virtual Appliance 1227 IMPORTEXPORT_EXPORT_TEMPLATE_TO_OVA Info Template USD{VmTemplateName} was exported successfully as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1228 IMPORTEXPORT_EXPORT_TEMPLATE_TO_OVA_FAILED Error Failed to export Template USD{VmTemplateName} as a Virtual Appliance to path USD{OvaPath} on Host USD{VdsName} 1300 NUMA_ADD_VM_NUMA_NODE_SUCCESS Info Add VM NUMA node successfully. 1301 NUMA_ADD_VM_NUMA_NODE_FAILED Error Add VM NUMA node failed. 1310 NUMA_UPDATE_VM_NUMA_NODE_SUCCESS Info Update VM NUMA node successfully. 1311 NUMA_UPDATE_VM_NUMA_NODE_FAILED Error Update VM NUMA node failed. 1320 NUMA_REMOVE_VM_NUMA_NODE_SUCCESS Info Remove VM NUMA node successfully. 1321 NUMA_REMOVE_VM_NUMA_NODE_FAILED Error Remove VM NUMA node failed. 1322 USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE Error Failed to create Template USD{VmTemplateName} or its disks from VM USD{VmName}. 1323 USER_ADD_VM_TEMPLATE_ASSIGN_ILLEGAL_FAILURE Error Failed preparing Template USD{VmTemplateName} for sealing (VM: USD{VmName}). 1324 USER_ADD_VM_TEMPLATE_SEAL_FAILURE Error Failed to seal Template USD{VmTemplateName} (VM: USD{VmName}). 1325 USER_SPARSIFY_IMAGE_START Info Started to sparsify USD{DiskAlias} 1326 USER_SPARSIFY_IMAGE_FINISH_SUCCESS Info USD{DiskAlias} sparsified successfully. 1327 USER_SPARSIFY_IMAGE_FINISH_FAILURE Error Failed to sparsify USD{DiskAlias}. 1328 USER_AMEND_IMAGE_START Info Started to amend USD{DiskAlias} 1329 USER_AMEND_IMAGE_FINISH_SUCCESS Info USD{DiskAlias} has been amended successfully. 1330 USER_AMEND_IMAGE_FINISH_FAILURE Error Failed to amend USD{DiskAlias}. 1340 VM_DOES_NOT_FIT_TO_SINGLE_NUMA_NODE Warning VM USD{VmName} does not fit to a single NUMA node on host USD{HostName}. This may negatively impact its performance. Consider using vNUMA and NUMA pinning for this VM. 1400 ENTITY_RENAMED_INTERNALLY Info USD{EntityType} USD{OldEntityName} was renamed from USD{OldEntityName} to USD{NewEntityName}. 1402 USER_LOGIN_ON_BEHALF_FAILED Error Failed to execute login on behalf - USD{LoginOnBehalfLogInfo}. 1403 IRS_CONFIRMED_DISK_SPACE_LOW Warning Warning, low confirmed disk space. USD{StorageDomainName} domain has USD{DiskSpace} GB of confirmed free space. 
2000 USER_HOTPLUG_DISK Info VM USD{VmName} disk USD{DiskAlias} was plugged by USD{UserName}. 2001 USER_FAILED_HOTPLUG_DISK Error Failed to plug disk USD{DiskAlias} to VM USD{VmName} (User: USD{UserName}). 2002 USER_HOTUNPLUG_DISK Info VM USD{VmName} disk USD{DiskAlias} was unplugged by USD{UserName}. 2003 USER_FAILED_HOTUNPLUG_DISK Error Failed to unplug disk USD{DiskAlias} from VM USD{VmName} (User: USD{UserName}). 2004 USER_COPIED_DISK Info User USD{UserName} is copying disk USD{DiskAlias} to domain USD{StorageDomainName}. 2005 USER_FAILED_COPY_DISK Error User USD{UserName} failed to copy disk USD{DiskAlias} to domain USD{StorageDomainName}. 2006 USER_COPIED_DISK_FINISHED_SUCCESS Info User USD{UserName} finished copying disk USD{DiskAlias} to domain USD{StorageDomainName}. 2007 USER_COPIED_DISK_FINISHED_FAILURE Error User USD{UserName} finished with error copying disk USD{DiskAlias} to domain USD{StorageDomainName}. 2008 USER_MOVED_DISK Info User USD{UserName} is moving disk USD{DiskAlias} to domain USD{StorageDomainName}. 2009 USER_FAILED_MOVED_VM_DISK Error User USD{UserName} failed to move disk USD{DiskAlias} to domain USD{StorageDomainName}. 2010 USER_MOVED_DISK_FINISHED_SUCCESS Info User USD{UserName} finished moving disk USD{DiskAlias} to domain USD{StorageDomainName}. 2011 USER_MOVED_DISK_FINISHED_FAILURE Error User USD{UserName} failed to move disk USD{DiskAlias} to domain USD{StorageDomainName}. 2012 USER_FINISHED_REMOVE_DISK_NO_DOMAIN Info Disk USD{DiskAlias} was successfully removed (User USD{UserName}). 2013 USER_FINISHED_FAILED_REMOVE_DISK_NO_DOMAIN Warning Failed to remove disk USD{DiskAlias} (User USD{UserName}). 2014 USER_FINISHED_REMOVE_DISK Info Disk USD{DiskAlias} was successfully removed from domain USD{StorageDomainName} (User USD{UserName}). 2015 USER_FINISHED_FAILED_REMOVE_DISK Warning Failed to remove disk USD{DiskAlias} from storage domain USD{StorageDomainName} (User: USD{UserName}). 2016 USER_ATTACH_DISK_TO_VM Info Disk USD{DiskAlias} was successfully attached to VM USD{VmName} by USD{UserName}. 2017 USER_FAILED_ATTACH_DISK_TO_VM Error Failed to attach Disk USD{DiskAlias} to VM USD{VmName} (User: USD{UserName}). 2018 USER_DETACH_DISK_FROM_VM Info Disk USD{DiskAlias} was successfully detached from VM USD{VmName} by USD{UserName}. 2019 USER_FAILED_DETACH_DISK_FROM_VM Error Failed to detach Disk USD{DiskAlias} from VM USD{VmName} (User: USD{UserName}). 2020 USER_ADD_DISK Info Add-Disk operation of 'USD{DiskAlias}' was initiated by USD{UserName}. 2021 USER_ADD_DISK_FINISHED_SUCCESS Info The disk 'USD{DiskAlias}' was successfully added. 2022 USER_ADD_DISK_FINISHED_FAILURE Error Add-Disk operation failed to complete. 2023 USER_FAILED_ADD_DISK Error Add-Disk operation failed (User: USD{UserName}). 2024 USER_RUN_UNLOCK_ENTITY_SCRIPT Info 2025 USER_MOVE_IMAGE_GROUP_FAILED_TO_DELETE_SRC_IMAGE Warning Possible failure while deleting USD{DiskAlias} from the source Storage Domain USD{StorageDomainName} during the move operation. The Storage Domain may be manually cleaned-up from possible leftovers (User:USD{UserName}). 2026 USER_MOVE_IMAGE_GROUP_FAILED_TO_DELETE_DST_IMAGE Warning Possible failure while clearing possible leftovers of USD{DiskAlias} from the target Storage Domain USD{StorageDomainName} after the move operation failed to copy the image to it properly. The Storage Domain may be manually cleaned-up from possible leftovers (User:USD{UserName}). 2027 USER_IMPORT_IMAGE Info User USD{UserName} importing image USD{RepoImageName} to domain USD{StorageDomainName}.
2028 USER_IMPORT_IMAGE_FINISHED_SUCCESS Info User USD{UserName} successfully imported image USD{RepoImageName} to domain USD{StorageDomainName}. 2029 USER_IMPORT_IMAGE_FINISHED_FAILURE Error User USD{UserName} failed to import image USD{RepoImageName} to domain USD{StorageDomainName}. 2030 USER_EXPORT_IMAGE Info User USD{UserName} exporting image USD{RepoImageName} to domain USD{DestinationStorageDomainName}. 2031 USER_EXPORT_IMAGE_FINISHED_SUCCESS Info User USD{UserName} successfully exported image USD{RepoImageName} to domain USD{DestinationStorageDomainName}. 2032 USER_EXPORT_IMAGE_FINISHED_FAILURE Error User USD{UserName} failed to export image USD{RepoImageName} to domain USD{DestinationStorageDomainName}. 2033 HOT_SET_NUMBER_OF_CPUS Info Hotplug CPU: changed the number of CPUs on VM USD{vmName} from USD{previousNumberOfCpus} to USD{numberOfCpus} 2034 FAILED_HOT_SET_NUMBER_OF_CPUS Error Failed to hot set number of CPUs to VM USD{vmName}. Underlying error message: USD{ErrorMessage} 2035 USER_ISCSI_BOND_HOST_RESTART_WARNING Warning The following Networks have been removed from the iSCSI bond USD{IscsiBondName}: USD{NetworkNames}. For those changes to take effect, the hosts must be moved to maintenance and activated again. 2036 ADD_DISK_INTERNAL Info Add-Disk operation of 'USD{DiskAlias}' was initiated by the system. 2037 ADD_DISK_INTERNAL_FAILURE Info Add-Disk operation of 'USD{DiskAlias}' failed to complete. 2038 USER_REMOVE_DISK_INITIATED Info Removal of Disk USD{DiskAlias} from domain USD{StorageDomainName} was initiated by USD{UserName}. 2039 HOT_SET_MEMORY Info Hot set memory: changed the amount of memory on VM USD{vmName} from USD{previousMem} to USD{newMem} 2040 FAILED_HOT_SET_MEMORY Error Failed to hot set memory to VM USD{vmName}. Underlying error message: USD{ErrorMessage} 2041 DISK_PREALLOCATION_FAILED Error 2042 USER_FINISHED_REMOVE_DISK_ATTACHED_TO_VMS Info Disk USD{DiskAlias} associated to the VMs USD{VmNames} was successfully removed from domain USD{StorageDomainName} (User USD{UserName}). 2043 USER_FINISHED_REMOVE_DISK_ATTACHED_TO_VMS_NO_DOMAIN Info Disk USD{DiskAlias} associated to the VMs USD{VmNames} was successfully removed (User USD{UserName}). 2044 USER_REMOVE_DISK_ATTACHED_TO_VMS_INITIATED Info Removal of Disk USD{DiskAlias} associated to the VMs USD{VmNames} from domain USD{StorageDomainName} was initiated by USD{UserName}. 2045 USER_COPY_IMAGE_GROUP_FAILED_TO_DELETE_DST_IMAGE Warning Possible failure while clearing possible leftovers of USD{DiskAlias} from the target Storage Domain USD{StorageDomainName} after the operation failed. The Storage Domain may be manually cleaned-up from possible leftovers (User:USD{UserName}). 2046 MEMORY_HOT_UNPLUG_SUCCESSFULLY_REQUESTED Info Hot unplug of memory device (USD{deviceId}) of size USD{memoryDeviceSizeMb}MB was successfully requested on VM 'USD{vmName}'. Physical memory guaranteed updated from USD{oldMinMemoryMb}MB to USD{newMinMemoryMb}MB. 2047 MEMORY_HOT_UNPLUG_FAILED Error Failed to hot unplug memory device (USD{deviceId}) of size USD{memoryDeviceSizeMb}MiB out of VM 'USD{vmName}': USD{errorMessage} 2048 FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE Error Failed to hot plug memory to VM USD{vmName}. Amount of added memory (USD{memoryAdded}MiB) is not divisible by USD{requiredFactor}MiB. 2049 MEMORY_HOT_UNPLUG_SUCCESSFULLY_REQUESTED_PLUS_MEMORY_INFO Info Hot unplug of memory device (USD{deviceId}) of size USD{memoryDeviceSizeMb}MiB was successfully requested on VM 'USD{vmName}'.
Defined Memory updated from USD{oldMemoryMb}MiB to USD{newMemoryMb}MiB. Physical memory guaranteed updated from USD{oldMinMemoryMb}MiB to USD{newMinMemoryMb}MiB. 2050 NO_MEMORY_DEVICE_TO_HOT_UNPLUG Info Defined memory can't be decreased. There are no hot plugged memory devices on VM USD{vmName}. 2051 NO_SUITABLE_MEMORY_DEVICE_TO_HOT_UNPLUG Info There is no memory device to hot unplug to satisfy request to decrement memory from USD{oldMemoryMb}MiB to USD{newMemoryMB}MiB on VM USD{vmName}. Available memory devices (decremented memory sizes): USD{memoryHotUnplugOptions}. 3000 USER_ADD_QUOTA Info Quota USD{QuotaName} has been added by USD{UserName}. 3001 USER_FAILED_ADD_QUOTA Error Failed to add Quota USD{QuotaName}. The operation was initiated by USD{UserName}. 3002 USER_UPDATE_QUOTA Info Quota USD{QuotaName} has been updated by USD{UserName}. 3003 USER_FAILED_UPDATE_QUOTA Error Failed to update Quota USD{QuotaName}. The operation was initiated by USD{UserName}.. 3004 USER_DELETE_QUOTA Info Quota USD{QuotaName} has been deleted by USD{UserName}. 3005 USER_FAILED_DELETE_QUOTA Error Failed to delete Quota USD{QuotaName}. The operation was initiated by USD{UserName}.. 3006 USER_EXCEEDED_QUOTA_CLUSTER_GRACE_LIMIT Error Cluster-Quota USD{QuotaName} limit exceeded and operation was blocked. Utilization: USD{Utilization}, Requested: USD{Requested} - Please select a different quota or contact your administrator to extend the quota. 3007 USER_EXCEEDED_QUOTA_CLUSTER_LIMIT Warning Cluster-Quota USD{QuotaName} limit exceeded and entered the grace zone. Utilization: USD{Utilization} (It is advised to select a different quota or contact your administrator to extend the quota). 3008 USER_EXCEEDED_QUOTA_CLUSTER_THRESHOLD Warning Cluster-Quota USD{QuotaName} is about to exceed. Utilization: USD{Utilization} 3009 USER_EXCEEDED_QUOTA_STORAGE_GRACE_LIMIT Error Storage-Quota USD{QuotaName} limit exceeded and operation was blocked. Utilization(used/requested): USD{CurrentStorage}%/USD{Requested}% - Please select a different quota or contact your administrator to extend the quota. 3010 USER_EXCEEDED_QUOTA_STORAGE_LIMIT Warning Storage-Quota USD{QuotaName} limit exceeded and entered the grace zone. Utilization: USD{CurrentStorage}% (It is advised to select a different quota or contact your administrator to extend the quota). 3011 USER_EXCEEDED_QUOTA_STORAGE_THRESHOLD Warning Storage-Quota USD{QuotaName} is about to exceed. Utilization: USD{CurrentStorage}% 3012 QUOTA_STORAGE_RESIZE_LOWER_THEN_CONSUMPTION Warning Storage-Quota USD{QuotaName}: the new size set for this quota is less than current disk utilization. 3013 MISSING_QUOTA_STORAGE_PARAMETERS_PERMISSIVE_MODE Warning Missing Quota for Disk, proceeding since in Permissive (Audit) mode. 3014 MISSING_QUOTA_CLUSTER_PARAMETERS_PERMISSIVE_MODE Warning Missing Quota for VM USD{VmName}, proceeding since in Permissive (Audit) mode. 3015 USER_EXCEEDED_QUOTA_CLUSTER_GRACE_LIMIT_PERMISSIVE_MODE Warning Cluster-Quota USD{QuotaName} limit exceeded, proceeding since in Permissive (Audit) mode. Utilization: USD{Utilization}, Requested: USD{Requested} - Please select a different quota or contact your administrator to extend the quota. 3016 USER_EXCEEDED_QUOTA_STORAGE_GRACE_LIMIT_PERMISSIVE_MODE Warning Storage-Quota USD{QuotaName} limit exceeded, proceeding since in Permissive (Audit) mode. Utilization(used/requested): USD{CurrentStorage}%/USD{Requested}% - Please select a different quota or contact your administrator to extend the quota. 
3017 USER_IMPORT_IMAGE_AS_TEMPLATE Info User USD{UserName} importing image USD{RepoImageName} as template USD{TemplateName} to domain USD{StorageDomainName}. 3018 USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_SUCCESS Info User USD{UserName} successfully imported image USD{RepoImageName} as template USD{TemplateName} to domain USD{StorageDomainName}. 3019 USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_FAILURE Error User USD{UserName} failed to import image USD{RepoImageName} as template USD{TemplateName} to domain USD{StorageDomainName}. 4000 GLUSTER_VOLUME_CREATE Info Gluster Volume USD{glusterVolumeName} created on cluster USD{clusterName}. 4001 GLUSTER_VOLUME_CREATE_FAILED Error Creation of Gluster Volume USD{glusterVolumeName} failed on cluster USD{clusterName}. 4002 GLUSTER_VOLUME_OPTION_ADDED Info Volume Option USD{Key} 4003 GLUSTER_VOLUME_OPTION_SET_FAILED Error Volume Option USD{Key} 4004 GLUSTER_VOLUME_START Info Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName} started. 4005 GLUSTER_VOLUME_START_FAILED Error Could not start Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4006 GLUSTER_VOLUME_STOP Info Gluster Volume USD{glusterVolumeName} stopped on cluster USD{clusterName}. 4007 GLUSTER_VOLUME_STOP_FAILED Error Could not stop Gluster Volume USD{glusterVolumeName} on cluster USD{clusterName}. 4008 GLUSTER_VOLUME_OPTIONS_RESET Info Volume Option USD{Key} 4009 GLUSTER_VOLUME_OPTIONS_RESET_FAILED Error Could not reset Gluster Volume USD{glusterVolumeName} Options on cluster USD{clusterName}. 4010 GLUSTER_VOLUME_DELETE Info Gluster Volume USD{glusterVolumeName} deleted on cluster USD{clusterName}. 4011 GLUSTER_VOLUME_DELETE_FAILED Error Could not delete Gluster Volume USD{glusterVolumeName} on cluster USD{clusterName}. 4012 GLUSTER_VOLUME_REBALANCE_START Info Gluster Volume USD{glusterVolumeName} rebalance started on cluster USD{clusterName}. 4013 GLUSTER_VOLUME_REBALANCE_START_FAILED Error Could not start Gluster Volume USD{glusterVolumeName} rebalance on cluster USD{clusterName}. 4014 GLUSTER_VOLUME_REMOVE_BRICKS Info Bricks removed from Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4015 GLUSTER_VOLUME_REMOVE_BRICKS_FAILED Error Could not remove bricks from Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4016 GLUSTER_VOLUME_REPLACE_BRICK_FAILED Error Replace Gluster Volume USD{glusterVolumeName} Brick failed on cluster USD{clusterName} 4017 GLUSTER_VOLUME_REPLACE_BRICK_START Info Gluster Volume USD{glusterVolumeName} Replace Brick started on cluster USD{clusterName}. 4018 GLUSTER_VOLUME_REPLACE_BRICK_START_FAILED Error Could not start Gluster Volume USD{glusterVolumeName} Replace Brick on cluster USD{clusterName}. 4019 GLUSTER_VOLUME_ADD_BRICK Info USD{NoOfBricks} brick(s) added to volume USD{glusterVolumeName} of cluster USD{clusterName}. 4020 GLUSTER_VOLUME_ADD_BRICK_FAILED Error Failed to add bricks to the Gluster Volume USD{glusterVolumeName} of cluster USD{clusterName}. 4021 GLUSTER_SERVER_REMOVE_FAILED Error Failed to remove host USD{VdsName} from Cluster USD{ClusterName}. 4022 GLUSTER_VOLUME_PROFILE_START Info Gluster Volume USD{glusterVolumeName} profiling started on cluster USD{clusterName}. 4023 GLUSTER_VOLUME_PROFILE_START_FAILED Error Could not start profiling on gluster volume USD{glusterVolumeName} of cluster USD{clusterName} 4024 GLUSTER_VOLUME_PROFILE_STOP Info Gluster Volume USD{glusterVolumeName} profiling stopped on cluster USD{clusterName}. 
4025 GLUSTER_VOLUME_PROFILE_STOP_FAILED Error Could not stop Profiling on gluster volume USD{glusterVolumeName} of cluster USD{clusterName}. 4026 GLUSTER_VOLUME_CREATED_FROM_CLI Warning Detected new volume USD{glusterVolumeName} on cluster USD{ClusterName}, and added it to engine DB. 4027 GLUSTER_VOLUME_DELETED_FROM_CLI Info Detected deletion of volume USD{glusterVolumeName} on cluster USD{ClusterName}, and deleted it from engine DB. 4028 GLUSTER_VOLUME_OPTION_SET_FROM_CLI Warning Detected new option USD{key} 4029 GLUSTER_VOLUME_OPTION_RESET_FROM_CLI Warning Detected option USD{key} 4030 GLUSTER_VOLUME_PROPERTIES_CHANGED_FROM_CLI Warning Detected changes in properties of volume USD{glusterVolumeName} of cluster USD{ClusterName}, and updated the same in engine DB. 4031 GLUSTER_VOLUME_BRICK_ADDED_FROM_CLI Warning Detected new brick USD{brick} on volume USD{glusterVolumeName} of cluster USD{ClusterName}, and added it to engine DB. 4032 GLUSTER_VOLUME_BRICK_REMOVED_FROM_CLI Info Detected brick USD{brick} removed from Volume USD{glusterVolumeName} of cluster USD{ClusterName}, and removed it from engine DB. 4033 GLUSTER_SERVER_REMOVED_FROM_CLI Info Detected server USD{VdsName} removed from Cluster USD{ClusterName}, and removed it from engine DB. 4034 GLUSTER_VOLUME_INFO_FAILED Error Failed to fetch gluster volume list from server USD{VdsName}. 4035 GLUSTER_COMMAND_FAILED Error Gluster command [USD{Command}] failed on server USD{Server}. 4038 GLUSTER_SERVER_REMOVE Info Host USD{VdsName} removed from Cluster USD{ClusterName}. 4039 GLUSTER_VOLUME_STARTED_FROM_CLI Warning Detected that Volume USD{glusterVolumeName} of Cluster USD{ClusterName} was started, and updated engine DB with its new status. 4040 GLUSTER_VOLUME_STOPPED_FROM_CLI Warning Detected that Volume USD{glusterVolumeName} of Cluster USD{ClusterName} was stopped, and updated engine DB with its new status. 4041 GLUSTER_VOLUME_OPTION_CHANGED_FROM_CLI Info Detected change in value of option USD{key} from USD{oldValue} to USD{newValue} on volume USD{glusterVolumeName} of cluster USD{ClusterName}, and updated it to engine DB. 4042 GLUSTER_HOOK_ENABLE Info Gluster Hook USD{GlusterHookName} enabled on cluster USD{ClusterName}. 4043 GLUSTER_HOOK_ENABLE_FAILED Error Failed to enable Gluster Hook USD{GlusterHookName} on cluster USD{ClusterName}. USD{FailureMessage} 4044 GLUSTER_HOOK_ENABLE_PARTIAL Warning Gluster Hook USD{GlusterHookName} enabled on some of the servers on cluster USD{ClusterName}. USD{FailureMessage} 4045 GLUSTER_HOOK_DISABLE Info Gluster Hook USD{GlusterHookName} disabled on cluster USD{ClusterName}. 4046 GLUSTER_HOOK_DISABLE_FAILED Error Failed to disable Gluster Hook USD{GlusterHookName} on cluster USD{ClusterName}. USD{FailureMessage} 4047 GLUSTER_HOOK_DISABLE_PARTIAL Warning Gluster Hook USD{GlusterHookName} disabled on some of the servers on cluster USD{ClusterName}. USD{FailureMessage} 4048 GLUSTER_HOOK_LIST_FAILED Error Failed to retrieve hook list from USD{VdsName} of Cluster USD{ClusterName}. 4049 GLUSTER_HOOK_CONFLICT_DETECTED Warning Detected conflict in hook USD{HookName} of Cluster USD{ClusterName}. 4050 GLUSTER_HOOK_DETECTED_NEW Info Detected new hook USD{HookName} in Cluster USD{ClusterName}. 4051 GLUSTER_HOOK_DETECTED_DELETE Info Detected removal of hook USD{HookName} in Cluster USD{ClusterName}. 4052 GLUSTER_VOLUME_OPTION_MODIFIED Info Volume Option USD{Key} changed to USD{Value} from USD{oldvalue} on USD{glusterVolumeName} of cluster USD{clusterName}.
4053 GLUSTER_HOOK_GETCONTENT_FAILED Error Failed to read content of hook USD{HookName} in Cluster USD{ClusterName}. 4054 GLUSTER_SERVICES_LIST_FAILED Error Could not fetch statuses of services from server USD{VdsName}. Updating statuses of all services on this server to UNKNOWN. 4055 GLUSTER_SERVICE_TYPE_ADDED_TO_CLUSTER Info Service type USD{ServiceType} was not mapped to cluster USD{ClusterName}. Mapped it now. 4056 GLUSTER_CLUSTER_SERVICE_STATUS_CHANGED Info Status of service type USD{ServiceType} changed from USD{OldStatus} to USD{NewStatus} on cluster USD{ClusterName} 4057 GLUSTER_SERVICE_ADDED_TO_SERVER Info Service USD{ServiceName} was not mapped to server USD{VdsName}. Mapped it now. 4058 GLUSTER_SERVER_SERVICE_STATUS_CHANGED Info Status of service USD{ServiceName} on server USD{VdsName} changed from USD{OldStatus} to USD{NewStatus}. Updating in engine now. 4059 GLUSTER_HOOK_UPDATED Info Gluster Hook USD{GlusterHookName} updated on conflicting servers. 4060 GLUSTER_HOOK_UPDATE_FAILED Error Failed to update Gluster Hook USD{GlusterHookName} on conflicting servers. USD{FailureMessage} 4061 GLUSTER_HOOK_ADDED Info Gluster Hook USD{GlusterHookName} added on conflicting servers. 4062 GLUSTER_HOOK_ADD_FAILED Error Failed to add Gluster Hook USD{GlusterHookName} on conflicting servers. USD{FailureMessage} 4063 GLUSTER_HOOK_REMOVED Info Gluster Hook USD{GlusterHookName} removed from all servers in cluster USD{ClusterName}. 4064 GLUSTER_HOOK_REMOVE_FAILED Error Failed to remove Gluster Hook USD{GlusterHookName} from cluster USD{ClusterName}. USD{FailureMessage} 4065 GLUSTER_HOOK_REFRESH Info Refreshed gluster hooks in Cluster USD{ClusterName}. 4066 GLUSTER_HOOK_REFRESH_FAILED Error Failed to refresh gluster hooks in Cluster USD{ClusterName}. 4067 GLUSTER_SERVICE_STARTED Info USD{servicetype} service started on host USD{VdsName} of cluster USD{ClusterName}. 4068 GLUSTER_SERVICE_START_FAILED Error Could not start USD{servicetype} service on host USD{VdsName} of cluster USD{ClusterName}. 4069 GLUSTER_SERVICE_STOPPED Info USD{servicetype} services stopped on host USD{VdsName} of cluster USD{ClusterName}. 4070 GLUSTER_SERVICE_STOP_FAILED Error Could not stop USD{servicetype} service on host USD{VdsName} of cluster USD{ClusterName}. 4071 GLUSTER_SERVICES_LIST_NOT_FETCHED Info Could not fetch list of services from USD{ServiceGroupType} named USD{ServiceGroupName}. 4072 GLUSTER_SERVICE_RESTARTED Info USD{servicetype} service re-started on host USD{VdsName} of cluster USD{ClusterName}. 4073 GLUSTER_SERVICE_RESTART_FAILED Error Could not re-start USD{servicetype} service on host USD{VdsName} of cluster USD{ClusterName}. 4074 GLUSTER_VOLUME_OPTIONS_RESET_ALL Info All Volume Options reset on USD{glusterVolumeName} of cluster USD{clusterName}. 4075 GLUSTER_HOST_UUID_NOT_FOUND Error Could not find gluster uuid of server USD{VdsName} on Cluster USD{ClusterName}. 4076 GLUSTER_VOLUME_BRICK_ADDED Info Brick [USD{brickpath}] on host [USD{servername}] added to volume [USD{glusterVolumeName}] of cluster USD{clusterName} 4077 GLUSTER_CLUSTER_SERVICE_STATUS_ADDED Info Status of service type USD{ServiceType} set to USD{NewStatus} on cluster USD{ClusterName} 4078 GLUSTER_VOLUME_REBALANCE_STOP Info Gluster Volume USD{glusterVolumeName} rebalance stopped of cluster USD{clusterName}. 4079 GLUSTER_VOLUME_REBALANCE_STOP_FAILED Error Could not stop rebalance of gluster volume USD{glusterVolumeName} of cluster USD{clusterName}. 
4080 START_REMOVING_GLUSTER_VOLUME_BRICKS Info Started removing bricks from Volume USD{glusterVolumeName} of cluster USD{clusterName} 4081 START_REMOVING_GLUSTER_VOLUME_BRICKS_FAILED Error Could not start removing bricks from Volume USD{glusterVolumeName} of cluster USD{clusterName} 4082 GLUSTER_VOLUME_REMOVE_BRICKS_STOP Info Stopped removing bricks from Volume USD{glusterVolumeName} of cluster USD{clusterName} 4083 GLUSTER_VOLUME_REMOVE_BRICKS_STOP_FAILED Error Failed to stop removing bricks from Volume USD{glusterVolumeName} of cluster USD{clusterName} 4084 GLUSTER_VOLUME_REMOVE_BRICKS_COMMIT Info Gluster volume USD{glusterVolumeName} remove bricks committed on cluster USD{clusterName}. USD{NoOfBricks} brick(s) removed from volume USD{glusterVolumeName}. 4085 GLUSTER_VOLUME_REMOVE_BRICKS_COMMIT_FAILED Error Gluster volume USD{glusterVolumeName} remove bricks could not be committed on cluster USD{clusterName} 4086 GLUSTER_BRICK_STATUS_CHANGED Warning Detected change in status of brick USD{brickpath} of volume USD{glusterVolumeName} of cluster USD{clusterName} from USD{oldValue} to USD{newValue} via USD{source}. 4087 GLUSTER_VOLUME_REBALANCE_FINISHED Info USD{action} USD{status} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4088 GLUSTER_VOLUME_MIGRATE_BRICK_DATA_FINISHED Info USD{action} USD{status} for brick(s) on volume USD{glusterVolumeName} of cluster USD{clusterName}. Please review to abort or commit. 4089 GLUSTER_VOLUME_REBALANCE_START_DETECTED_FROM_CLI Info Detected start of rebalance on volume USD{glusterVolumeName} of Cluster USD{ClusterName} from CLI. 4090 START_REMOVING_GLUSTER_VOLUME_BRICKS_DETECTED_FROM_CLI Info Detected start of brick removal for bricks USD{brick} on volume USD{glusterVolumeName} of Cluster USD{ClusterName} from CLI. 4091 GLUSTER_VOLUME_REBALANCE_NOT_FOUND_FROM_CLI Warning Could not find information for rebalance on volume USD{glusterVolumeName} of Cluster USD{ClusterName} from CLI. Marking it as unknown. 4092 REMOVE_GLUSTER_VOLUME_BRICKS_NOT_FOUND_FROM_CLI Warning Could not find information for remove brick on volume USD{glusterVolumeName} of Cluster USD{ClusterName} from CLI. Marking it as unknown. 4093 GLUSTER_VOLUME_DETAILS_REFRESH Info Refreshed details of the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4094 GLUSTER_VOLUME_DETAILS_REFRESH_FAILED Error Failed to refresh the details of volume USD{glusterVolumeName} of cluster USD{clusterName}. 4095 GLUSTER_HOST_UUID_ALREADY_EXISTS Error Gluster UUID of host USD{VdsName} on Cluster USD{ClusterName} already exists. 4096 USER_FORCE_SELECTED_SPM_STOP_FAILED Error Failed to force select USD{VdsName} as the SPM due to a failure to stop the current SPM. 4097 GLUSTER_GEOREP_SESSION_DELETED_FROM_CLI Warning Detected deletion of geo-replication session USD{geoRepSessionKey} from volume USD{glusterVolumeName} of cluster USD{clusterName} 4098 GLUSTER_GEOREP_SESSION_DETECTED_FROM_CLI Warning Detected new geo-replication session USD{geoRepSessionKey} for volume USD{glusterVolumeName} of cluster USD{clusterName}. Adding it to engine. 4099 GLUSTER_GEOREP_SESSION_REFRESH Info Refreshed geo-replication sessions for volume USD{glusterVolumeName} of cluster USD{clusterName}. 4100 GLUSTER_GEOREP_SESSION_REFRESH_FAILED Error Failed to refresh geo-replication sessions for volume USD{glusterVolumeName} of cluster USD{clusterName}. 4101 GEOREP_SESSION_STOP Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been stopped.
4102 GEOREP_SESSION_STOP_FAILED Error Failed to stop geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4103 GEOREP_SESSION_DELETED Info Geo-replication session deleted on volume USD{glusterVolumeName} of cluster USD{clusterName} 4104 GEOREP_SESSION_DELETE_FAILED Error Failed to delete geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4105 GLUSTER_GEOREP_CONFIG_SET Info Configuration USD{key} has been set to USD{value} on the geo-rep session USD{geoRepSessionKey}. 4106 GLUSTER_GEOREP_CONFIG_SET_FAILED Error Failed to set the configuration USD{key} to USD{value} on geo-rep session USD{geoRepSessionKey}. 4107 GLUSTER_GEOREP_CONFIG_LIST Info Refreshed configuration options for geo-replication session USD{geoRepSessionKey} 4108 GLUSTER_GEOREP_CONFIG_LIST_FAILED Error Failed to refresh configuration options for geo-replication session USD{geoRepSessionKey} 4109 GLUSTER_GEOREP_CONFIG_SET_DEFAULT Info Configuration of USD{key} of session USD{geoRepSessionKey} reset to its default value . 4110 GLUSTER_GEOREP_CONFIG_SET_DEFAULT_FAILED Error Failed to set USD{key} of session USD{geoRepSessionKey} to its default value. 4111 GLUSTER_VOLUME_SNAPSHOT_DELETED Info Gluster volume snapshot USD{snapname} deleted. 4112 GLUSTER_VOLUME_SNAPSHOT_DELETE_FAILED Error Failed to delete gluster volume snapshot USD{snapname}. 4113 GLUSTER_VOLUME_ALL_SNAPSHOTS_DELETED Info Deleted all the gluster volume snapshots for the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4114 GLUSTER_VOLUME_ALL_SNAPSHOTS_DELETE_FAILED Error Failed to delete all the gluster volume snapshots for the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4115 GLUSTER_VOLUME_SNAPSHOT_ACTIVATED Info Activated the gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4116 GLUSTER_VOLUME_SNAPSHOT_ACTIVATE_FAILED Error Failed to activate the gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4117 GLUSTER_VOLUME_SNAPSHOT_DEACTIVATED Info De-activated the gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4118 GLUSTER_VOLUME_SNAPSHOT_DEACTIVATE_FAILED Error Failed to de-activate gluster volume snapshot USD{snapname} on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4119 GLUSTER_VOLUME_SNAPSHOT_RESTORED Info Restored the volume USD{glusterVolumeName} of cluster USD{clusterName} to the state of gluster volume snapshot USD{snapname}. 4120 GLUSTER_VOLUME_SNAPSHOT_RESTORE_FAILED Error Failed to restore the volume USD{glusterVolumeName} of cluster USD{clusterName} to the state of gluster volume snapshot USD{snapname}. 4121 GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATED Info Updated Gluster volume snapshot configuration(s). 4122 GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATE_FAILED Error Failed to update gluster volume snapshot configuration(s). 4123 GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATE_FAILED_PARTIALLY Error Failed to update gluster volume snapshot configuration(s) USD{failedSnapshotConfigs}. 4124 NEW_STORAGE_DEVICE_DETECTED Info Found new storage device USD{storageDevice} on host USD{VdsName}, and added it to engine DB." 4125 STORAGE_DEVICE_REMOVED_FROM_THE_HOST Info Detected deletion of storage device USD{storageDevice} on host USD{VdsName}, and deleting it from engine DB." 
4126 SYNC_STORAGE_DEVICES_IN_HOST Info Manually synced the storage devices from host USD{VdsName} 4127 SYNC_STORAGE_DEVICES_IN_HOST_FAILED Error Failed to sync storage devices from host USD{VdsName} 4128 GEOREP_OPTION_SET_FROM_CLI Warning Detected new option USD{key} 4129 GEOREP_OPTION_CHANGED_FROM_CLI Warning Detected change in value of option USD{key} from USD{oldValue} to USD{value} for geo-replication session on volume USD{glusterVolumeName} of cluster USD{ClusterName}, and updated it to engine. 4130 GLUSTER_MASTER_VOLUME_STOP_FAILED_DURING_SNAPSHOT_RESTORE Error Could not stop master volume USD{glusterVolumeName} of cluster USD{clusterName} during snapshot restore. 4131 GLUSTER_MASTER_VOLUME_SNAPSHOT_RESTORE_FAILED Error Could not restore master volume USD{glusterVolumeName} of cluster USD{clusterName}. 4132 GLUSTER_VOLUME_SNAPSHOT_CREATED Info Snapshot USD{snapname} created for volume USD{glusterVolumeName} of cluster USD{clusterName}. 4133 GLUSTER_VOLUME_SNAPSHOT_CREATE_FAILED Error Could not create snapshot for volume USD{glusterVolumeName} of cluster USD{clusterName}. 4134 GLUSTER_VOLUME_SNAPSHOT_SCHEDULED Info Snapshots scheduled on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4135 GLUSTER_VOLUME_SNAPSHOT_SCHEDULE_FAILED Error Failed to schedule snapshots on the volume USD{glusterVolumeName} of cluster USD{clusterName}. 4136 GLUSTER_VOLUME_SNAPSHOT_RESCHEDULED Info Rescheduled snapshots on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4137 GLUSTER_VOLUME_SNAPSHOT_RESCHEDULE_FAILED Error Failed to reschedule snapshots on volume USD{glusterVolumeName} of cluster USD{clusterName}. 4138 CREATE_GLUSTER_BRICK Info Brick USD{brickName} created successfully on host USD{vdsName} of cluster USD{clusterName}. 4139 CREATE_GLUSTER_BRICK_FAILED Error Failed to create brick USD{brickName} on host USD{vdsName} of cluster USD{clusterName}. 4140 GLUSTER_GEO_REP_PUB_KEY_FETCH_FAILED Error Failed to fetch public keys. 4141 GLUSTER_GET_PUB_KEY Info Public key fetched. 4142 GLUSTER_GEOREP_PUBLIC_KEY_WRITE_FAILED Error Failed to write public keys to USD{VdsName} 4143 GLUSTER_WRITE_PUB_KEYS Info Public keys written to USD{VdsName} 4144 GLUSTER_GEOREP_SETUP_MOUNT_BROKER_FAILED Error Failed to setup geo-replication mount broker for user USD{geoRepUserName} on the slave volume USD{geoRepSlaveVolumeName}. 4145 GLUSTER_SETUP_GEOREP_MOUNT_BROKER Info Geo-replication mount broker has been setup for user USD{geoRepUserName} on the slave volume USD{geoRepSlaveVolumeName}. 4146 GLUSTER_GEOREP_SESSION_CREATE_FAILED Error Failed to create geo-replication session between master volume : USD{glusterVolumeName} of cluster USD{clusterName} and slave volume : USD{geoRepSlaveVolumeName} for the user USD{geoRepUserName}. 4147 CREATE_GLUSTER_VOLUME_GEOREP_SESSION Info Created geo-replication session between master volume : USD{glusterVolumeName} of cluster USD{clusterName} and slave volume : USD{geoRepSlaveVolumeName} for the user USD{geoRepUserName}. 4148 GLUSTER_VOLUME_SNAPSHOT_SOFT_LIMIT_REACHED Info Gluster Volume Snapshot soft limit reached for the volume USD{glusterVolumeName} on cluster USD{clusterName}. 4149 HOST_FEATURES_INCOMPATIBILE_WITH_CLUSTER Error Host USD{VdsName} does not comply with the list of features supported by cluster USD{ClusterName}. USD{UnSupportedFeature} is not supported by the Host 4150 GLUSTER_VOLUME_SNAPSHOT_SCHEDULE_DELETED Info Snapshot schedule deleted for volume USD{glusterVolumeName} of USD{clusterName}. 
4151 GLUSTER_BRICK_STATUS_DOWN Info Status of brick USD{brickpath} of volume USD{glusterVolumeName} on cluster USD{ClusterName} is down. 4152 GLUSTER_VOLUME_SNAPSHOT_DETECTED_NEW Info Found new gluster volume snapshot USD{snapname} for volume USD{glusterVolumeName} on cluster USD{ClusterName}, and added it to engine DB. 4153 GLUSTER_VOLUME_SNAPSHOT_DELETED_FROM_CLI Info Detected deletion of gluster volume snapshot USD{snapname} for volume USD{glusterVolumeName} on cluster USD{ClusterName}, and deleting it from engine DB. 4154 GLUSTER_VOLUME_SNAPSHOT_CLUSTER_CONFIG_DETECTED_NEW Info Found new gluster volume snapshot configuration USD{snapConfigName} with value USD{snapConfigValue} on cluster USD{ClusterName}, and added it to engine DB. 4155 GLUSTER_VOLUME_SNAPSHOT_VOLUME_CONFIG_DETECTED_NEW Info Found new gluster volume snapshot configuration USD{snapConfigName} with value USD{snapConfigValue} for volume USD{glusterVolumeName} on cluster USD{ClusterName}, and added it to engine DB. 4156 GLUSTER_VOLUME_SNAPSHOT_HARD_LIMIT_REACHED Info Gluster Volume Snapshot hard limit reached for the volume USD{glusterVolumeName} on cluster USD{clusterName}. 4157 GLUSTER_CLI_SNAPSHOT_SCHEDULE_DISABLE_FAILED Error Failed to disable gluster CLI based snapshot schedule on cluster USD{clusterName}. 4158 GLUSTER_CLI_SNAPSHOT_SCHEDULE_DISABLED Info Disabled gluster CLI based scheduling successfully on cluster USD{clusterName}. 4159 SET_UP_PASSWORDLESS_SSH Info Password-less SSH has been set up for user USD{geoRepUserName} on the nodes of remote volume USD{geoRepSlaveVolumeName} from the nodes of the volume USD{glusterVolumeName}. 4160 SET_UP_PASSWORDLESS_SSH_FAILED Error Failed to set up password-less SSH for user USD{geoRepUserName} on the nodes of remote volume USD{geoRepSlaveVolumeName} from the nodes of the volume USD{glusterVolumeName}. 4161 GLUSTER_VOLUME_TYPE_UNSUPPORTED Warning Detected a volume USD{glusterVolumeName} with type USD{glusterVolumeType} on cluster USD{Cluster} and it is not fully supported by engine. 4162 GLUSTER_VOLUME_BRICK_REPLACED Info Replaced brick 'USD{brick}' with new brick 'USD{newBrick}' of Gluster Volume USD{glusterVolumeName} on cluster USD{clusterName} 4163 GLUSTER_SERVER_STATUS_DISCONNECTED Info Gluster server USD{vdsName} set to DISCONNECTED on cluster USD{clusterName}. 4164 GLUSTER_STORAGE_DOMAIN_SYNC_FAILED Info Failed to synchronize data from storage domain USD{storageDomainName} to remote location. 4165 GLUSTER_STORAGE_DOMAIN_SYNCED Info Successfully synchronized data from storage domain USD{storageDomainName} to remote location. 4166 GLUSTER_STORAGE_DOMAIN_SYNC_STARTED Info Successfully started data synchronization from storage domain USD{storageDomainName} to remote location. 4167 STORAGE_DOMAIN_DR_DELETED Error Deleted the data synchronization schedule for storage domain USD{storageDomainName} as the underlying geo-replication session USD{geoRepSessionKey} has been deleted. 4168 GLUSTER_WEBHOOK_ADDED Info Added webhook on USD{clusterName} 4169 GLUSTER_WEBHOOK_ADD_FAILED Error Failed to add webhook on USD{clusterName} 4170 GLUSTER_VOLUME_RESET_BRICK_FAILED Error 4171 GLUSTER_VOLUME_BRICK_RESETED Info 4172 GLUSTER_VOLUME_CONFIRMED_SPACE_LOW Warning Warning! Low confirmed free space on gluster volume USD{glusterVolumeName} 4436 GLUSTER_SERVER_ADD_FAILED Error Failed to add host USD{VdsName} into Cluster USD{ClusterName}. USD{ErrorMessage} 4437 GLUSTER_SERVERS_LIST_FAILED Error Failed to fetch gluster peer list from server USD{VdsName} on Cluster USD{ClusterName}.
USD{ErrorMessage} 4595 GLUSTER_VOLUME_GEO_REP_START_FAILED_EXCEPTION Error Failed to start geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4596 GLUSTER_VOLUME_GEO_REP_START Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been started. 4597 GLUSTER_VOLUME_GEO_REP_PAUSE_FAILED Error Failed to pause geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4598 GLUSTER_VOLUME_GEO_REP_RESUME_FAILED Error Failed to resume geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} 4599 GLUSTER_VOLUME_GEO_REP_RESUME Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been resumed. 4600 GLUSTER_VOLUME_GEO_REP_PAUSE Info Geo-replication session on volume USD{glusterVolumeName} of cluster USD{clusterName} has been paused. 9000 VDS_ALERT_FENCE_IS_NOT_CONFIGURED Info Failed to verify Power Management configuration for Host USD{VdsName}. 9001 VDS_ALERT_FENCE_TEST_FAILED Info Power Management test failed for Host USD{VdsName}.USD{Reason} 9002 VDS_ALERT_FENCE_OPERATION_FAILED Info Failed to power fence host USD{VdsName}. Please check the host status and its power management settings, and then manually reboot it and click "Confirm Host Has Been Rebooted" 9003 VDS_ALERT_FENCE_OPERATION_SKIPPED Info Host USD{VdsName} became non responsive. Fence operation skipped as the system is still initializing and this is not a host where hosted engine was running previously. 9004 VDS_ALERT_FENCE_NO_PROXY_HOST Info There is no other host in the data center that can be used to test the power management settings. 9005 VDS_ALERT_FENCE_STATUS_VERIFICATION_FAILED Info Failed to verify Host USD{Host} USD{Status} status. Please USD{Status} Host USD{Host} manually. 9006 CANNOT_HIBERNATE_RUNNING_VMS_AFTER_CLUSTER_CPU_UPGRADE Warning Hibernation of VMs after CPU upgrade of Cluster USD{Cluster} is not supported. Please stop and restart those VMs in case you wish to hibernate them 9007 VDS_ALERT_SECONDARY_AGENT_USED_FOR_FENCE_OPERATION Info Secondary fence agent was used to USD{Operation} Host USD{VdsName} 9008 VDS_HOST_NOT_RESPONDING_CONNECTING Warning Host USD{VdsName} is not responding. It will stay in Connecting state for a grace period of USD{Seconds} seconds and after that an attempt to fence the host will be issued. 9009 VDS_ALERT_PM_HEALTH_CHECK_FENCE_AGENT_NON_RESPONSIVE Info Health check on Host USD{VdsName} indicates that Fence-Agent USD{AgentId} is non-responsive. 9010 VDS_ALERT_PM_HEALTH_CHECK_START_MIGHT_FAIL Info Health check on Host USD{VdsName} indicates that future attempts to Start this host using Power-Management are expected to fail. 9011 VDS_ALERT_PM_HEALTH_CHECK_STOP_MIGHT_FAIL Info Health check on Host USD{VdsName} indicates that future attempts to Stop this host using Power-Management are expected to fail. 9012 VDS_ALERT_PM_HEALTH_CHECK_RESTART_MIGHT_FAIL Info Health check on Host USD{VdsName} indicates that future attempts to Restart this host using Power-Management are expected to fail. 9013 VDS_ALERT_FENCE_OPERATION_SKIPPED_BROKEN_CONNECTIVITY Info Host USD{VdsName} became non responsive and was not restarted due to Fencing Policy: USD{Percents} percent of the Hosts in the Cluster have connectivity issues. 9014 VDS_ALERT_NOT_RESTARTED_DUE_TO_POLICY Info Host USD{VdsName} became non responsive and was not restarted due to the Cluster Fencing Policy.
9015 VDS_ALERT_FENCE_DISABLED_BY_CLUSTER_POLICY Info Host USD{VdsName} became Non Responsive and was not restarted due to disabled fencing in the Cluster Fencing Policy. 9016 FENCE_DISABLED_IN_CLUSTER_POLICY Info Fencing is disabled in Fencing Policy of the Cluster USD{ClusterName}, so HA VMs running on a non-responsive host will not be restarted elsewhere. 9017 FENCE_OPERATION_STARTED Info Power management USD{Action} of Host USD{VdsName} initiated. 9018 FENCE_OPERATION_SUCCEEDED Info Power management USD{Action} of Host USD{VdsName} succeeded. 9019 FENCE_OPERATION_FAILED Error Power management USD{Action} of Host USD{VdsName} failed. 9020 FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED Info Executing power management USD{Action} on Host USD{Host} using Proxy Host USD{ProxyHost} and Fence Agent USD{AgentType}:USD{AgentIp}. 9021 FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED Warning Execution of power management USD{Action} on Host USD{Host} using Proxy Host USD{ProxyHost} and Fence Agent USD{AgentType}:USD{AgentIp} failed. 9022 ENGINE_NO_FULL_BACKUP Info There is no full backup available, please run engine-backup to prevent data loss in case of corruption. 9023 ENGINE_NO_WARM_BACKUP Info Full backup was created on USD{Date} and it's too old. Please run engine-backup to prevent data loss in case of corruption. 9024 ENGINE_BACKUP_STARTED Normal Engine backup started. 9025 ENGINE_BACKUP_COMPLETED Normal Engine backup completed successfully. 9026 ENGINE_BACKUP_FAILED Error Engine backup failed. 9028 VDS_ALERT_NO_PM_CONFIG_FENCE_OPERATION_SKIPPED Info Host USD{VdsName} became non responsive. It has no power management configured. Please check the host status, manually reboot it, and click "Confirm Host Has Been Rebooted" 9500 TASK_STOPPING_ASYNC_TASK Info Stopping async task USD{CommandName} that started at USD{Date} 9501 TASK_CLEARING_ASYNC_TASK Info Clearing asynchronous task USD{CommandName} that started at USD{Date} 9506 USER_ACTIVATE_STORAGE_DOMAIN_FAILED_ASYNC Warning Failed to autorecover Storage Domain USD{StorageDomainName} (Data Center USD{StoragePoolName}). 9600 IMPORTEXPORT_IMPORT_VM_INVALID_INTERFACES Warning While importing VM USD{EntityName}, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster or are missing a suitable VM network interface profile. Network Name was not set in the Interface/s USD{Interfaces}. 9601 VDS_SET_NON_OPERATIONAL_VM_NETWORK_IS_BRIDGELESS Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} networks, the following VM networks are non-VM networks: 'USD{Networks}'. The host will become NonOperational. 9602 HA_VM_FAILED Error Highly Available VM USD{VmName} failed. It will be restarted automatically. 9603 HA_VM_RESTART_FAILED Error Restart of the Highly Available VM USD{VmName} failed. 9604 EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} emulated machine. The cluster emulated machine is USD{clusterEmulatedMachines} and the host emulated machines are USD{hostSupportedEmulatedMachines}. 9605 EXCEEDED_MAXIMUM_NUM_OF_RESTART_HA_VM_ATTEMPTS Error Highly Available VM USD{VmName} could not be restarted automatically, exceeded the maximum number of attempts. 9606 IMPORTEXPORT_SNAPSHOT_VM_INVALID_INTERFACES Warning While previewing a snapshot of VM USD{EntityName}, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s USD{Interfaces}. 
9607 ADD_VM_FROM_SNAPSHOT_INVALID_INTERFACES Warning While adding vm USD{EntityName} from snapshot, the Network/s USD{Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s USD{Interfaces}. 9608 RNG_SOURCES_INCOMPATIBLE_WITH_CLUSTER Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} Random Number Generator sources. The Hosts supported sources are: USD{hostSupportedRngSources}; and the cluster requirements are: USD{clusterRequiredRngSources}. 9609 EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL Warning Host USD{VdsName} does not comply with the cluster USD{ClusterName} emulated machines. The current cluster compatibility level supports USD{clusterEmulatedMachines} and the host emulated machines are USD{hostSupportedEmulatedMachines}. 9610 MIXING_RHEL_VERSIONS_IN_CLUSTER Warning Not possible to mix RHEL 6.x and 7.x hosts in one cluster. Tried adding USD{addingRhel} host to a cluster with USD{previousRhel} hosts. 9611 COLD_REBOOT_VM_DOWN Info VM USD{VmName} is down as a part of cold reboot process 9612 COLD_REBOOT_FAILED Error Cold reboot of VM USD{VmName} failed 9613 EXCEEDED_MAXIMUM_NUM_OF_COLD_REBOOT_VM_ATTEMPTS Error VM USD{VmName} could not be rebooted, exceeded the maximum number of attempts. 9700 DWH_STARTED Info ETL Service started. 9701 DWH_STOPPED Info ETL Service stopped. 9704 DWH_ERROR Error Error in ETL Service. 9801 EXTERNAL_EVENT_NORMAL Info An external event with NORMAL severity has been added. 9802 EXTERNAL_EVENT_WARNING Warning An external event with WARNING severity has been added. 9803 EXTERNAL_EVENT_ERROR Error An external event with ERROR severity has been added. 9804 EXTERNAL_ALERT Info An external event with ALERT severity has been added. 9901 WATCHDOG_EVENT Warning Watchdog event (USD{wdaction}) triggered on USD{VmName} at USD{wdevent} (host time). 9910 USER_ADD_CLUSTER_POLICY Info Scheduling Policy USD{ClusterPolicy} was added. (User: USD{UserName}) 9911 USER_FAILED_TO_ADD_CLUSTER_POLICY Error Failed to add Scheduling Policy: USD{ClusterPolicy}. (User: USD{UserName}) 9912 USER_UPDATE_CLUSTER_POLICY Info Scheduling Policy USD{ClusterPolicy} was updated. (User: USD{UserName}) 9913 USER_FAILED_TO_UPDATE_CLUSTER_POLICY Error Failed to update Scheduling Policy: USD{ClusterPolicy}. (User: USD{UserName}) 9914 USER_REMOVE_CLUSTER_POLICY Info Scheduling Policy USD{ClusterPolicy} was removed. (User: USD{UserName}) 9915 USER_FAILED_TO_REMOVE_CLUSTER_POLICY Error Failed to remove Scheduling Policy: USD{ClusterPolicy}. (User: USD{UserName}) 9920 FAILED_TO_CONNECT_TO_SCHEDULER_PROXY Error Failed to connect to external scheduler proxy. External filters, scoring functions and load balancing will not be performed. 10000 VDS_UNTRUSTED Error Host USD{VdsName} was set to non-operational. Host is not trusted by the attestation service. 10001 USER_UPDATE_VM_FROM_TRUSTED_TO_UNTRUSTED Warning The VM USD{VmName} was updated from trusted cluster to non-trusted cluster. 10002 USER_UPDATE_VM_FROM_UNTRUSTED_TO_TRUSTED Warning The VM USD{VmName} was updated from non-trusted cluster to trusted cluster. 10003 IMPORTEXPORT_IMPORT_VM_FROM_TRUSTED_TO_UNTRUSTED Warning The VM USD{VmName} was created in trusted cluster and imported into a non-trusted cluster 10004 IMPORTEXPORT_IMPORT_VM_FROM_UNTRUSTED_TO_TRUSTED Warning The VM USD{VmName} was created in non-trusted cluster and imported into a trusted cluster 10005 USER_ADD_VM_FROM_TRUSTED_TO_UNTRUSTED Warning The VM USD{VmName} was created in an untrusted cluster. 
It originated from the Template USD{VmTemplateName}, which was created in a trusted cluster. 10006 USER_ADD_VM_FROM_UNTRUSTED_TO_TRUSTED Warning The VM USD{VmName} was created in a trusted cluster. It originated from the Template USD{VmTemplateName}, which was created in an untrusted cluster. 10007 IMPORTEXPORT_IMPORT_TEMPLATE_FROM_TRUSTED_TO_UNTRUSTED Warning The Template USD{VmTemplateName} was created in trusted cluster and imported into a non-trusted cluster 10008 IMPORTEXPORT_IMPORT_TEMPLATE_FROM_UNTRUSTED_TO_TRUSTED Warning The Template USD{VmTemplateName} was created in non-trusted cluster and imported into a trusted cluster 10009 USER_ADD_VM_TEMPLATE_FROM_TRUSTED_TO_UNTRUSTED Warning The non-trusted Template USD{VmTemplateName} was created from trusted Vm USD{VmName}. 10010 USER_ADD_VM_TEMPLATE_FROM_UNTRUSTED_TO_TRUSTED Warning The trusted template USD{VmTemplateName} was created from non-trusted Vm USD{VmName}. 10011 USER_UPDATE_VM_TEMPLATE_FROM_TRUSTED_TO_UNTRUSTED Warning The Template USD{VmTemplateName} was updated from trusted cluster to non-trusted cluster. 10012 USER_UPDATE_VM_TEMPLATE_FROM_UNTRUSTED_TO_TRUSTED Warning The Template USD{VmTemplateName} was updated from non-trusted cluster to trusted cluster. 10013 IMPORTEXPORT_GET_EXTERNAL_VMS_NOT_IN_DOWN_STATUS Warning The following VMs retrieved from external server USD{URL} are not in down status: USD{Vms}. 10100 USER_ADDED_NETWORK_QOS Info Network QoS USD{QosName} was added. (User: USD{UserName}) 10101 USER_FAILED_TO_ADD_NETWORK_QOS Error Failed to add Network QoS USD{QosName}. (User: USD{UserName}) 10102 USER_REMOVED_NETWORK_QOS Info Network QoS USD{QosName} was removed. (User: USD{UserName}) 10103 USER_FAILED_TO_REMOVE_NETWORK_QOS Error Failed to remove Network QoS USD{QosName}. (User: USD{UserName}) 10104 USER_UPDATED_NETWORK_QOS Info Network QoS USD{QosName} was updated. (User: USD{UserName}) 10105 USER_FAILED_TO_UPDATE_NETWORK_QOS Error Failed to update Network QoS USD{QosName}. (User: USD{UserName}) 10110 USER_ADDED_QOS Info QoS USD{QoSName} was added. (User: USD{UserName}) 10111 USER_FAILED_TO_ADD_QOS Error Failed to add QoS USD{QoSName}. (User: USD{UserName}) 10112 USER_REMOVED_QOS Info QoS USD{QoSName} was removed. (User: USD{UserName}) 10113 USER_FAILED_TO_REMOVE_QOS Error Failed to remove QoS USD{QoSName}. (User: USD{UserName}) 10114 USER_UPDATED_QOS Info QoS USD{QoSName} was updated. (User: USD{UserName}) 10115 USER_FAILED_TO_UPDATE_QOS Error Failed to update QoS USD{QoSName}. (User: USD{UserName}) 10120 USER_ADDED_DISK_PROFILE Info Disk Profile USD{ProfileName} was successfully added (User: USD{UserName}). 10121 USER_FAILED_TO_ADD_DISK_PROFILE Error Failed to add Disk Profile (User: USD{UserName}). 10122 USER_REMOVED_DISK_PROFILE Info Disk Profile USD{ProfileName} was successfully removed (User: USD{UserName}). 10123 USER_FAILED_TO_REMOVE_DISK_PROFILE Error Failed to remove Disk Profile USD{ProfileName} (User: USD{UserName}). 10124 USER_UPDATED_DISK_PROFILE Info Disk Profile USD{ProfileName} was successfully updated (User: USD{UserName}). 10125 USER_FAILED_TO_UPDATE_DISK_PROFILE Error Failed to update Disk Profile USD{ProfileName} (User: USD{UserName}). 10130 USER_ADDED_CPU_PROFILE Info CPU Profile USD{ProfileName} was successfully added (User: USD{UserName}). 10131 USER_FAILED_TO_ADD_CPU_PROFILE Error Failed to add CPU Profile (User: USD{UserName}). 10132 USER_REMOVED_CPU_PROFILE Info CPU Profile USD{ProfileName} was successfully removed (User: USD{UserName}).
10133 USER_FAILED_TO_REMOVE_CPU_PROFILE Error Failed to remove CPU Profile USD{ProfileName} (User: USD{UserName}). 10134 USER_UPDATED_CPU_PROFILE Info CPU Profile USD{ProfileName} was successfully updated (User: USD{UserName}). 10135 USER_FAILED_TO_UPDATE_CPU_PROFILE Error Failed to update CPU Profile USD{ProfileName} (User: USD{UserName}). 10200 USER_UPDATED_MOM_POLICIES Info Mom policy was updated on host USD{VdsName}. 10201 USER_FAILED_TO_UPDATE_MOM_POLICIES Warning Mom policy could not be updated on host USD{VdsName}. 10250 PM_POLICY_UP_TO_MAINTENANCE Info Host USD{Host} is not currently needed, activating maintenance mode in preparation for shutdown. 10251 PM_POLICY_MAINTENANCE_TO_DOWN Info Host USD{Host} is not currently needed, shutting down. 10252 PM_POLICY_TO_UP Info Reactivating host USD{Host} according to the current power management policy. 10300 CLUSTER_ALERT_HA_RESERVATION Info Cluster USD{ClusterName} failed the HA Reservation check, HA VMs on host(s): USD{Hosts} will fail to migrate in case of a failover, consider adding resources or shutting down unused VMs. 10301 CLUSTER_ALERT_HA_RESERVATION_DOWN Info Cluster USD{ClusterName} passed the HA Reservation check. 10350 USER_ADDED_AFFINITY_GROUP Info Affinity Group USD{affinityGroupName} was added. (User: USD{UserName}) 10351 USER_FAILED_TO_ADD_AFFINITY_GROUP Error Failed to add Affinity Group USD{affinityGroupName}. (User: USD{UserName}) 10352 USER_UPDATED_AFFINITY_GROUP Info Affinity Group USD{affinityGroupName} was updated. (User: USD{UserName}) 10353 USER_FAILED_TO_UPDATE_AFFINITY_GROUP Error Failed to update Affinity Group USD{affinityGroupName}. (User: USD{UserName}) 10354 USER_REMOVED_AFFINITY_GROUP Info Affinity Group USD{affinityGroupName} was removed. (User: USD{UserName}) 10355 USER_FAILED_TO_REMOVE_AFFINITY_GROUP Error Failed to remove Affinity Group USD{affinityGroupName}. (User: USD{UserName}) 10356 VM_TO_HOST_CONFLICT_IN_ENFORCING_POSITIVE_AND_NEGATIVE_AFFINITY Error The affinity groups: USD{AffinityGroups}, with hosts :USD{Hosts} and VMs : USD{Vms}, have VM to host conflicts between positive and negative enforcing affinity groups. 10357 VM_TO_HOST_CONFLICT_IN_POSITIVE_AND_NEGATIVE_AFFINITY Warning The affinity groups: USD{AffinityGroups}, with hosts: USD{Hosts} and VMs: USD{Vms}, have VM to host conflicts between positive and negative affinity groups. 10358 VM_TO_HOST_CONFLICTS_POSITIVE_VM_TO_VM_AFFINITY Warning The affinity groups: USD{AffinityGroups}, with hosts : USD{Hosts} and VMs: USD{Vms}, have conflicts between VM to host affinity and VM to VM positive affinity. 10359 VM_TO_HOST_CONFLICTS_NEGATIVE_VM_TO_VM_AFFINITY Warning The affinity groups: USD{AffinityGroups}, with hosts : USD{Hosts} and VMs: USD{Vms}, have conflicts between VM to host affinity and VM to VM negative affinity. 10360 NON_INTERSECTING_POSITIVE_HOSTS_AFFINITY_CONFLICTS Warning The affinity groups: USD{AffinityGroups}, with hosts : USD{Hosts} and VMs : USD{Vms} , have non intersecting positive hosts conflicts. 10361 VM_TO_VM_AFFINITY_CONFLICTS Error 10380 USER_ADDED_AFFINITY_LABEL Info Affinity Label USD{labelName} was added. (User: USD{UserName}) 10381 USER_FAILED_TO_ADD_AFFINITY_LABEL Error Failed to add Affinity Label USD{labelName}. (User: USD{UserName}) 10382 USER_UPDATED_AFFINITY_LABEL Info Affinity Label USD{labelName} was updated. (User: USD{UserName}) 10383 USER_FAILED_TO_UPDATE_AFFINITY_LABEL Error Failed to update Affinity Label USD{labelName}. 
(User: USD{UserName}) 10384 USER_REMOVED_AFFINITY_LABEL Info Affinity Label USD{labelName} was removed. (User: USD{UserName}) 10385 USER_FAILED_TO_REMOVE_AFFINITY_LABEL Error Failed to remove Affinity Label USD{labelName}. (User: USD{UserName}) 10400 ISCSI_BOND_ADD_SUCCESS Info iSCSI bond 'USD{IscsiBondName}' was successfully created in Data Center 'USD{StoragePoolName}'. 10401 ISCSI_BOND_ADD_FAILED Error Failed to create iSCSI bond 'USD{IscsiBondName}' in Data Center 'USD{StoragePoolName}'. 10402 ISCSI_BOND_EDIT_SUCCESS Info iSCSI bond 'USD{IscsiBondName}' was successfully updated. 10403 ISCSI_BOND_EDIT_FAILED Error Failed to update iSCSI bond 'USD{IscsiBondName}'. 10404 ISCSI_BOND_REMOVE_SUCCESS Info iSCSI bond 'USD{IscsiBondName}' was removed from Data Center 'USD{StoragePoolName}' 10405 ISCSI_BOND_REMOVE_FAILED Error Failed to remove iSCSI bond 'USD{IscsiBondName}' from Data Center 'USD{StoragePoolName}' 10406 ISCSI_BOND_EDIT_SUCCESS_WITH_WARNING Warning iSCSI bond 'USD{IscsiBondName}' was successfully updated but some of the hosts encountered connection issues. 10407 ISCSI_BOND_ADD_SUCCESS_WITH_WARNING Warning iSCSI bond 'USD{IscsiBondName}' was successfully created in Data Center 'USD{StoragePoolName}' but some of the hosts encountered connection issues. 10450 USER_SET_HOSTED_ENGINE_MAINTENANCE Info Hosted Engine HA maintenance mode was updated on host USD{VdsName}. 10451 USER_FAILED_TO_SET_HOSTED_ENGINE_MAINTENANCE Error Hosted Engine HA maintenance mode could not be updated on host USD{VdsName}. 10452 VDS_MAINTENANCE_MANUAL_HA Warning Host USD{VdsName} was switched to Maintenance mode, but Hosted Engine HA maintenance could not be enabled. Please enable it manually. 10453 USER_VDS_MAINTENANCE_MANUAL_HA Warning Host USD{VdsName} was switched to Maintenance mode by USD{UserName}, but Hosted Engine HA maintenance could not be enabled. Please enable it manually. 10454 VDS_ACTIVATE_MANUAL_HA Warning Host USD{VdsName} was activated by USD{UserName}, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually. 10455 VDS_ACTIVATE_MANUAL_HA_ASYNC Warning Host USD{VdsName} was autorecovered, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually. 10456 HOSTED_ENGINE_VM_IMPORT_SUCCEEDED Normal Hosted Engine VM was imported successfully 10460 HOSTED_ENGINE_DOMAIN_IMPORT_SUCCEEDED Normal Hosted Engine Storage Domain imported successfully 10461 HOSTED_ENGINE_DOMAIN_IMPORT_FAILED Error Failed to import the Hosted Engine Storage Domain 10500 EXTERNAL_SCHEDULER_PLUGIN_ERROR Error Running the external scheduler plugin 'USD{PluginName}' failed: 'USD{ErrorMessage}' 10501 EXTERNAL_SCHEDULER_ERROR Error Running the external scheduler failed: 'USD{ErrorMessage}' 10550 VM_SLA_POLICY_CPU Info VM USD{VmName} SLA Policy was set. CPU limit is set to USD{cpuLimit} 10551 VM_SLA_POLICY_STORAGE Info VM USD{VmName} SLA Policy was set. Storage policy changed for disks: [USD{diskList}] 10552 VM_SLA_POLICY_CPU_STORAGE Info VM USD{VmName} SLA Policy was set. CPU limit is set to USD{cpuLimit}. Storage policy changed for disks: [USD{diskList}] 10553 FAILED_VM_SLA_POLICY Error Failed to set SLA Policy to VM USD{VmName}. Underlying error message: USD{ErrorMessage} 10600 USER_REMOVE_AUDIT_LOG Info Event list message USD{AuditLogId} was removed by User USD{UserName}. 10601 USER_REMOVE_AUDIT_LOG_FAILED Error User USD{UserName} failed to remove event list message USD{AuditLogId}. 
10602 USER_CLEAR_ALL_AUDIT_LOG_EVENTS Info All events were removed. (User: USD{UserName}) 10603 USER_CLEAR_ALL_AUDIT_LOG_EVENTS_FAILED Error Failed to remove all events. (User: USD{UserName}) 10604 USER_DISPLAY_ALL_AUDIT_LOG Info All events were displayed. (User: USD{UserName}) 10605 USER_DISPLAY_ALL_AUDIT_LOG_FAILED Error Failed to display all events. (User: USD{UserName}) 10606 USER_CLEAR_ALL_AUDIT_LOG_ALERTS Info All alerts were removed. (User: USD{UserName}) 10607 USER_CLEAR_ALL_AUDIT_LOG_ALERTS_FAILED Error Failed to remove all alerts. (User: USD{UserName}) 10700 MAC_POOL_ADD_SUCCESS Info MAC Pool 'USD{MacPoolName}' (id 10701 MAC_POOL_ADD_FAILED Error Failed to create MAC Pool 'USD{MacPoolName}'. (User: USD{UserName}) 10702 MAC_POOL_EDIT_SUCCESS Info MAC Pool 'USD{MacPoolName}' (id 10703 MAC_POOL_EDIT_FAILED Error Failed to update MAC Pool 'USD{MacPoolName}' (id 10704 MAC_POOL_REMOVE_SUCCESS Info MAC Pool 'USD{MacPoolName}' (id 10705 MAC_POOL_REMOVE_FAILED Error Failed to remove MAC Pool 'USD{MacPoolName}' (id 10750 CINDER_PROVIDER_ERROR Error An error occurred on Cinder provider: 'USD{CinderException}' 10751 CINDER_DISK_CONNECTION_FAILURE Error Failed to retrieve connection information for Cinder Disk 'USD{DiskAlias}'. 10752 CINDER_DISK_CONNECTION_VOLUME_DRIVER_UNSUPPORTED Error Unsupported volume driver for Cinder Disk 'USD{DiskAlias}'. 10753 USER_FINISHED_FAILED_REMOVE_CINDER_DISK Error Failed to remove disk USD{DiskAlias} from storage domain USD{StorageDomainName}. The following entity id could not be deleted from the Cinder provider 'USD{imageId}'. (User: USD{UserName}). 10754 USER_ADDED_LIBVIRT_SECRET Info Authentication Key USD{LibvirtSecretUUID} was added. (User: USD{UserName}). 10755 USER_FAILED_TO_ADD_LIBVIRT_SECRET Error Failed to add Authentication Key USD{LibvirtSecretUUID}. (User: USD{UserName}). 10756 USER_UPDATE_LIBVIRT_SECRET Info Authentication Key USD{LibvirtSecretUUID} was updated. (User: USD{UserName}). 10757 USER_FAILED_TO_UPDATE_LIBVIRT_SECRET Error Failed to update Authentication Key USD{LibvirtSecretUUID}. (User: USD{UserName}). 10758 USER_REMOVED_LIBVIRT_SECRET Info Authentication Key USD{LibvirtSecretUUID} was removed. (User: USD{UserName}). 10759 USER_FAILED_TO_REMOVE_LIBVIRT_SECRET Error Failed to remove Authentication Key USD{LibvirtSecretUUID}. (User: USD{UserName}). 10760 FAILED_TO_REGISTER_LIBVIRT_SECRET Error Failed to register Authentication Keys for storage domain USD{StorageDomainName} on host USD{VdsName}. 10761 FAILED_TO_UNREGISTER_LIBVIRT_SECRET Error Failed to unregister Authentication Keys for storage domain USD{StorageDomainName} on host USD{VdsName}. 10762 FAILED_TO_REGISTER_LIBVIRT_SECRET_ON_VDS Error Failed to register Authentication Keys on host USD{VdsName}. 10763 NO_LIBRBD_PACKAGE_AVAILABLE_ON_VDS Error Librbd1 package is not available on host USD{VdsName}, which is mandatory for using Cinder storage domains. 10764 FAILED_TO_FREEZE_VM Warning Failed to freeze guest filesystems on VM USD{VmName}. Note that using the created snapshot might cause data inconsistency. 10765 FAILED_TO_THAW_VM Warning Failed to thaw guest filesystems on VM USD{VmName}. The filesystems might be unresponsive until the VM is restarted. 10766 FREEZE_VM_INITIATED Normal Freeze of guest filesystems on VM USD{VmName} was initiated. 10767 FREEZE_VM_SUCCESS Normal Guest filesystems on VM USD{VmName} have been frozen successfully. 10768 THAW_VM_SUCCESS Normal Guest filesystems on VM USD{VmName} have been thawed successfully. 
10769 USER_FAILED_TO_FREEZE_VM Warning Failed to freeze guest filesystems on USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 10770 USER_FAILED_TO_THAW_VM Warning Failed to thaw guest filesystems on USD{VmName} (Host: USD{VdsName}, User: USD{UserName}). 10771 VDS_CANNOT_CONNECT_TO_GLUSTERFS Error Host USD{VdsName} cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host. 10780 AFFINITY_RULES_ENFORCEMENT_MANAGER_START Normal Affinity Rules Enforcement Manager started. 10781 AFFINITY_RULES_ENFORCEMENT_MANAGER_INTERVAL_REACHED Normal 10800 VM_ADD_HOST_DEVICES Info Host devices USD{NamesAdded} were attached to Vm USD{VmName} by User USD{UserName}. 10801 VM_REMOVE_HOST_DEVICES Info Host devices USD{NamesRemoved} were detached from Vm USD{VmName} by User USD{UserName}. 10802 VDS_BROKER_COMMAND_FAILURE Error VDSM USD{VdsName} command USD{CommandName} failed: USD{message} 10803 IRS_BROKER_COMMAND_FAILURE Error VDSM command USD{CommandName} failed: USD{message} 10804 VDS_UNKNOWN_HOST Error The address of host USD{VdsName} could not be determined 10810 SYSTEM_CHANGE_STORAGE_POOL_STATUS_UP_REPORTING_HOSTS Normal Data Center USD{StoragePoolName} status was changed to UP as some of its hosts are in status UP. 10811 SYSTEM_CHANGE_STORAGE_POOL_STATUS_NON_RESPONSIVE_NO_REPORTING_HOSTS Info Data Center USD{StoragePoolName} status was changed to Non Responsive as none of its hosts are in status UP. 10812 STORAGE_POOL_LOWER_THAN_ENGINE_HIGHEST_CLUSTER_LEVEL Info Data Center USD{StoragePoolName} compatibility version is USD{dcVersion}, which is lower than latest engine version USD{engineVersion}. Please upgrade your Data Center to latest version to successfully finish upgrade of your setup. 10900 HOST_SYNC_ALL_NETWORKS_FAILED Error Failed to sync all host USD{VdsName} networks 10901 HOST_SYNC_ALL_NETWORKS_FINISHED Info Managed to sync all host USD{VdsName} networks. 10902 PERSIST_HOST_SETUP_NETWORK_ON_HOST Info (USD{Sequence}/USD{Total}): Applying network's changes on host USD{VdsName}. (User: USD{UserName}) 10903 PERSIST_SETUP_NETWORK_ON_HOST_FINISHED Info (USD{Sequence}/USD{Total}): Successfully applied changes on host USD{VdsName}. (User: USD{UserName}) 10904 PERSIST_SETUP_NETWORK_ON_HOST_FAILED Error (USD{Sequence}/USD{Total}): Failed to apply changes on host USD{VdsName}. (User: USD{UserName}) 10905 CLUSTER_SYNC_ALL_NETWORKS_FAILED Error Failed to sync all cluster USD{ClusterName} networks 10906 CLUSTER_SYNC_ALL_NETWORKS_STARTED Info Started sync of all cluster USD{ClusterName} networks. 10910 NETWORK_REMOVE_NIC_FILTER_PARAMETER Info Network interface filter parameter (id USD{VmNicFilterParameterId}) was successfully removed by USD{UserName}. 10911 NETWORK_REMOVE_NIC_FILTER_PARAMETER_FAILED Error Failed to remove network interface filter parameter ((id USD{VmNicFilterParameterId}) by USD{UserName}. 10912 NETWORK_ADD_NIC_FILTER_PARAMETER Info Network interface filter parameter USD{VmNicFilterParameterName} (id USD{VmNicFilterParameterId}) was successfully added to Interface with id USD{VmInterfaceId} on VM USD{VmName} by USD{UserName}. 10913 NETWORK_ADD_NIC_FILTER_PARAMETER_FAILED Error Failed to add network interface filter parameter USD{VmNicFilterParameterName} (id USD{VmNicFilterParameterId}) to Interface with id USD{VmInterfaceId} on VM USD{VmName} by USD{UserName} by USD{UserName}. 
10914 NETWORK_UPDATE_NIC_FILTER_PARAMETER Info Network interface filter parameter USD{VmNicFilterParameterName} (id USD{VmNicFilterParameterId}) on Interface with id USD{VmInterfaceId} on VM USD{VmName} was successfully updated by USD{UserName}. 10915 NETWORK_UPDATE_NIC_FILTER_PARAMETER_FAILED Error Failed to update network interface filter parameter USD{VmNicFilterParameterName} (id USD{VmNicFilterParameterId}) on Interface with id USD{VmInterfaceId} on VM USD{VmName} by USD{UserName}. 10916 MAC_ADDRESS_HAD_TO_BE_REALLOCATED Warning Some MAC addresses had to be reallocated because they are duplicate. 10917 MAC_ADDRESS_VIOLATES_NO_DUPLICATES_SETTING Error Duplicate MAC addresses had to be introduced into mac pool violating no duplicates setting. 10918 MAC_ADDRESS_COULDNT_BE_REALLOCATED Error Some MAC addresses had to be reallocated, but operation failed because of insufficient amount of free MACs. 10920 NETWORK_IMPORT_EXTERNAL_NETWORK Info Successfully initiated import of external network USD{NetworkName} from provider USD{ProviderName}. 10921 NETWORK_IMPORT_EXTERNAL_NETWORK_FAILED Error Failed to initiate external network USD{NetworkName} from provider USD{ProviderName}. 10922 NETWORK_IMPORT_EXTERNAL_NETWORK_INTERNAL Info 10923 NETWORK_IMPORT_EXTERNAL_NETWORK_INTERNAL_FAILED Error 10924 NETWORK_AUTO_DEFINE_NO_DEFAULT_EXTERNAL_PROVIDER Warning Cannot create auto-defined network connected to USD{NetworkName}. Cluster USD{ClusterName} does not have default external network provider. 11000 USER_ADD_EXTERNAL_JOB Info New external Job USD{description} was added by user USD{UserName} 11001 USER_ADD_EXTERNAL_JOB_FAILED Error Failed to add new external Job USD{description} 11500 FAULTY_MULTIPATHS_ON_HOST Warning Faulty multipath paths on host USD{VdsName} on devices: [USD{MpathGuids}] 11501 NO_FAULTY_MULTIPATHS_ON_HOST Normal No faulty multipath paths on host USD{VdsName} 11502 MULTIPATH_DEVICES_WITHOUT_VALID_PATHS_ON_HOST Warning Multipath devices without valid paths on host USD{VdsName} : [USD{MpathGuids}] 12000 MIGRATION_REASON_AFFINITY_ENFORCEMENT Info Affinity rules enforcement 12001 MIGRATION_REASON_LOAD_BALANCING Info Load balancing 12002 MIGRATION_REASON_HOST_IN_MAINTENANCE Info Host preparing for maintenance 12003 VM_MIGRATION_NOT_ALL_VM_NICS_WERE_PLUGGED_BACK Error After migration of USD{VmName}, following vm nics failed to be plugged back: USD{NamesOfNotRepluggedNics}. 12004 VM_MIGRATION_PLUGGING_VM_NICS_FAILED Error After migration of USD{VmName} vm nics failed to be plugged back. 12005 CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION Error Cannot update compatibility version of Vm/Template: [USD{VmName}], Message: USD{Message} 13000 DEPRECATED_API Warning Client from address "USD{ClientAddress}" is using version USD{ApiVersion} of the API, which has been \ 13001 DEPRECATED_IPTABLES_FIREWALL Warning Cluster USD{ClusterName} uses IPTables firewall, which has been deprecated in \
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/appe-event_codes
Chapter 5. DNS Operator in OpenShift Container Platform
Chapter 5. DNS Operator in OpenShift Container Platform The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods, enabling DNS-based Kubernetes Service discovery in OpenShift Container Platform. 5.1. DNS Operator The DNS Operator implements the dns API from the operator.openshift.io API group. The Operator deploys CoreDNS using a daemon set, creates a service for the daemon set, and configures the kubelet to instruct pods to use the CoreDNS service IP address for name resolution. Procedure The DNS Operator is deployed during installation with a Deployment object. Use the oc get command to view the deployment status: $ oc get -n openshift-dns-operator deployment/dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h Use the oc get command to view the state of the DNS Operator: $ oc get clusteroperator/dns Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m AVAILABLE, PROGRESSING, and DEGRADED provide information about the status of the operator. AVAILABLE is True when at least one pod from the CoreDNS daemon set reports an Available status condition. 5.2. Changing the DNS Operator managementState DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. The managementState of the DNS Operator is set to Managed by default, which means that the DNS Operator is actively managing its resources. You can change it to Unmanaged , which means the DNS Operator is not managing its resources. The following are use cases for changing the DNS Operator managementState : You are a developer and want to test a configuration change to see if it fixes an issue in CoreDNS. You can stop the DNS Operator from overwriting the fix by setting the managementState to Unmanaged . You are a cluster administrator and have reported an issue with CoreDNS, but need to apply a workaround until the issue is fixed. You can set the managementState field of the DNS Operator to Unmanaged to apply the workaround. Procedure Change the managementState of the DNS Operator: $ oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}' 5.3. Controlling DNS pod placement The DNS Operator has two daemon sets: one for CoreDNS and one for managing the /etc/hosts file. The daemon set for /etc/hosts must run on every node host to add an entry for the cluster image registry to support pulling images. Security policies can prohibit communication between pairs of nodes, which prevents the daemon set for CoreDNS from running on every node. As a cluster administrator, you can use a custom node selector to configure the daemon set for CoreDNS to run or not run on certain nodes. Prerequisites You installed the oc CLI. You are logged in to the cluster with a user with cluster-admin privileges. 
Procedure To prevent communication between certain nodes, configure the spec.nodePlacement.nodeSelector API field: Modify the DNS Operator object named default : $ oc edit dns.operator/default Specify a node selector in the spec.nodePlacement.nodeSelector API field. For example, to run the CoreDNS daemon set only on worker nodes: spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: "" To allow the daemon set for CoreDNS to run on nodes, configure a taint and toleration: Modify the DNS Operator object named default : $ oc edit dns.operator/default Specify a taint key and a toleration for the taint: spec: nodePlacement: tolerations: - effect: NoExecute key: "dns-only" operator: Equal value: abc tolerationSeconds: 3600 1 1 If the taint is dns-only , it can be tolerated indefinitely. You can omit tolerationSeconds . 5.4. View the default DNS Every new OpenShift Container Platform installation has a dns.operator named default . Procedure Use the oc describe command to view the default dns : $ oc describe dns.operator/default Example output Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS ... Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2 ... 1 The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. 2 The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. To find the service CIDR of your cluster, use the oc get command: $ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}' Example output [172.30.0.0/16] 5.5. Using DNS forwarding You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways: Specify name servers for every zone. If the forwarded zone is the Ingress domain managed by OpenShift Container Platform, then the upstream name server must be authorized for the domain. Provide a list of upstream DNS servers. Change the default forwarding policy. Note A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers. Procedure Modify the DNS Operator object named default : $ oc edit dns.operator/default After you issue the command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on Server . If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers. Configuring DNS forwarding apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10 1 Must comply with the rfc6335 service name syntax. 2 Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field. 3 Defines the policy to select upstream resolvers. The default value is Random . You can also use the values RoundRobin and Sequential . 4 A maximum of 15 upstreams is allowed per forwardPlugin . 5 Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. 
If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf . 6 Determines the order in which upstream servers are selected for querying. You can specify one of these values: Random , RoundRobin , or Sequential . The default value is Sequential . 7 Optional. You can use it to provide upstream resolvers. 8 You can specify two types of upstreams - SystemResolvConf and Network . SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a network resolver. You can specify one or both. 9 If the specified type is Network , you must provide an IP address. The address field must be a valid IPv4 or IPv6 address. 10 If the specified type is Network , you can optionally provide a port. The port field must have a value between 1 and 65535 . If you do not specify a port for the upstream, by default port 853 is tried. Optional: When working in a highly regulated environment, you might need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that you can ensure additional security and data privacy for DNS traffic. Cluster administrators can configure transport layer security (TLS) for forwarded DNS queries. Configuring DNS forwarding with TLS apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10 1 Must comply with the rfc6335 service name syntax. 2 Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field. 3 When configuring TLS for forwarded DNS queries, set the transport field to have the value TLS . By default, CoreDNS caches forwarded connections for 10 seconds. CoreDNS will hold a TCP connection open for those 10 seconds if no request is issued. With large clusters, ensure that your DNS server is aware that it might get many new connections to hold open because you can initiate a connection per node. Set up your DNS hierarchy accordingly to avoid performance issues. 4 When configuring TLS for forwarded DNS queries, this is a mandatory server name used as part of the server name indication (SNI) to validate the upstream TLS server certificate. 5 Defines the policy to select upstream resolvers. The default value is Random . You can also use the values RoundRobin and Sequential . 6 Required. You can use it to provide upstream resolvers. A maximum of 15 upstream entries are allowed per forwardPlugin entry. 7 Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf . 8 Network type indicates that this upstream resolver should handle forwarded requests separately from the upstream resolvers listed in /etc/resolv.conf . Only the Network type is allowed when using TLS and you must provide an IP address. 9 The address field must be a valid IPv4 or IPv6 address. 10 You can optionally provide a port. 
The port must have a value between 1 and 65535 . If you do not specify a port for the upstream, by default port 853 is tried. Note If servers is undefined or invalid, the config map only contains the default server. Verification View the config map: $ oc get configmap/dns-default -n openshift-dns -o yaml Sample DNS ConfigMap based on the sample DNS apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns 1 Changes to the forwardPlugin trigger a rolling update of the CoreDNS daemon set. Additional resources For more information on DNS forwarding, see the CoreDNS forward documentation . 5.6. DNS Operator status You can inspect the status and view the details of the DNS Operator using the oc describe command. Procedure View the status of the DNS Operator: $ oc describe clusteroperators/dns 5.7. DNS Operator logs You can view DNS Operator logs by using the oc logs command. Procedure View the logs of the DNS Operator: $ oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator 5.8. Setting the CoreDNS log level You can configure the CoreDNS log level to determine the amount of detail in logged error messages. The valid values for CoreDNS log level are Normal , Debug , and Trace . The default logLevel is Normal . Note The errors plugin is always enabled. The following logLevel settings report different error responses: logLevel : Normal enables the "errors" class: log . { class error } . logLevel : Debug enables the "denial" class: log . { class denial error } . logLevel : Trace enables the "all" class: log . { class all } . Procedure To set logLevel to Debug , enter the following command: $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Debug"}}' --type=merge To set logLevel to Trace , enter the following command: $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Trace"}}' --type=merge Verification To ensure the desired log level was set, check the config map: $ oc get configmap/dns-default -n openshift-dns -o yaml 5.9. Setting the CoreDNS Operator log level Cluster administrators can configure the Operator log level to more quickly track down OpenShift DNS issues. The valid values for operatorLogLevel are Normal , Debug , and Trace . Trace has the most detailed information. The default operatorLogLevel is Normal . There are seven logging levels for issues: Trace, Debug, Info, Warning, Error, Fatal, and Panic. After the logging level is set, log entries with that severity or anything above it will be logged. operatorLogLevel: "Normal" sets logrus.SetLogLevel("Info") . operatorLogLevel: "Debug" sets logrus.SetLogLevel("Debug") . operatorLogLevel: "Trace" sets logrus.SetLogLevel("Trace") . Procedure To set operatorLogLevel to Debug , enter the following command: $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Debug"}}' --type=merge To set operatorLogLevel to Trace , enter the following command: $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Trace"}}' --type=merge
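As a quick sanity check after changing either log level, you can read the values back from the DNS Operator object and confirm that the generated Corefile picks up the corresponding log class (present for Debug and Trace ). This is a minimal sketch that reuses only resources shown in this chapter; the jsonpath expressions are assumptions based on the spec fields patched above: $ oc get dnses.operator.openshift.io/default -o jsonpath='{.spec.logLevel}{"\n"}{.spec.operatorLogLevel}{"\n"}' and $ oc get configmap/dns-default -n openshift-dns -o yaml | grep class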
[ "oc get -n openshift-dns-operator deployment/dns-operator", "NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h", "oc get clusteroperator/dns", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m", "patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'", "oc edit dns.operator/default", "spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc edit dns.operator/default", "spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" operators: Equal value: abc tolerationSeconds: 3600 1", "oc describe dns.operator/default", "Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2", "oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'", "[172.30.0.0/16]", "oc edit dns.operator/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10", "oc get configmap/dns-default -n openshift-dns -o yaml", "apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns", "oc describe clusteroperators/dns", "oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Debug\"}}' --type=merge", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Trace\"}}' --type=merge", "oc get configmap/dns-default -n openshift-dns -o yaml", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Debug\"}}' --type=merge", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Trace\"}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/dns-operator
Chapter 7. Requesting persistent storage for workspaces
Chapter 7. Requesting persistent storage for workspaces OpenShift Dev Spaces workspaces and workspace data are ephemeral and are lost when the workspace stops. To preserve the workspace state in persistent storage while the workspace is stopped, request a Kubernetes PersistentVolume (PV) for the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance. You can request a PV by using the devfile or a Kubernetes PersistentVolumeClaim (PVC). An example of a PV is the /projects/ directory of a workspace, which is mounted by default for non-ephemeral workspaces. Persistent Volumes come at a cost: attaching a persistent volume slows workspace startup. Warning Starting another, concurrently running workspace with a ReadWriteOnce PV might fail. Additional resources Red Hat OpenShift Documentation: Understanding persistent storage Kubernetes Documentation: Persistent Volumes 7.1. Requesting persistent storage in a devfile When a workspace requires its own persistent storage, request a PersistentVolume (PV) in the devfile, and OpenShift Dev Spaces will automatically manage the necessary PersistentVolumeClaims. Prerequisites You have not started the workspace. Procedure Add a volume component in the devfile: ... components: ... - name: <chosen_volume_name> volume: size: <requested_volume_size> G ... Add a volumeMount for the relevant container in the devfile: ... components: - name: ... container: ... volumeMounts: - name: <chosen_volume_name_from_previous_step> path: <path_where_to_mount_the_PV> ... Example 7.1. A devfile that provisions a PV for a workspace to a container When a workspace is started with the following devfile, the cache PV is provisioned to the golang container in the ./cache container path: schemaVersion: 2.1.0 metadata: name: mydevfile components: - name: golang container: image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumeMounts: - name: cache path: /.cache - name: cache volume: size: 2Gi 7.2. Requesting persistent storage in a PVC You can opt to apply a PersistentVolumeClaim (PVC) to request a PersistentVolume (PV) for your workspaces in the following cases: Not all developers of the project need the PV. The PV lifecycle goes beyond the lifecycle of a single workspace. The data included in the PV are shared across workspaces. Tip You can apply a PVC to the Dev Workspace containers even if the workspace is ephemeral and its devfile contains the controller.devfile.io/storage-type: ephemeral attribute. Prerequisites You have not started the workspace. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . A PVC is created in your user project to mount to all Dev Workspace containers. Procedure Add the controller.devfile.io/mount-to-devworkspace: true label to the PVC. Optional: Use the annotations to configure how the PVC is mounted: Table 7.1. Optional annotations Annotation Description controller.devfile.io/mount-path: The mount path for the PVC. Defaults to /tmp/ <PVC_name> . controller.devfile.io/read-only: Set to 'true' or 'false' to specify whether the PVC is to be mounted as read-only. Defaults to 'false' , resulting in the PVC mounted as read/write. Example 7.2. 
Mounting a read-only PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <pvc_name> labels: controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: </example/directory> 1 controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi 2 storageClassName: <storage_class_name> 3 volumeMode: Filesystem 1 The mounted PV is available at </example/directory> in the workspace. 2 Example size value of the requested storage. 3 The name of the StorageClass required by the claim. Remove this line if you want to use a default StorageClass.
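The label and optional annotations from the procedure above can also be applied to an existing PVC from the command line. The following is a minimal sketch that assumes a PVC named <pvc_name> in your user project; the mount path shown is only an example value: oc label persistentvolumeclaim <pvc_name> controller.devfile.io/mount-to-devworkspace=true and, optionally, oc annotate persistentvolumeclaim <pvc_name> controller.devfile.io/mount-path=/projects/shared controller.devfile.io/read-only=true to set a custom mount path and mount the PVC as read-only.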
[ "components: - name: <chosen_volume_name> volume: size: <requested_volume_size> G", "components: - name: container: volumeMounts: - name: <chosen_volume_name_from_previous_step> path: <path_where_to_mount_the_PV>", "schemaVersion: 2.1.0 metadata: name: mydevfile components: - name: golang container: image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumeMounts: - name: cache path: /.cache - name: cache volume: size: 2Gi", "oc label persistentvolumeclaim <PVC_name> \\ controller.devfile.io/mount-to-devworkspace=true", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <pvc_name> labels: controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: </example/directory> 1 controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi 2 storageClassName: <storage_class_name> 3 volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/user_guide/requesting-persistent-storage-for-workspaces
13.2. Using and Caching Credentials with SSSD
13.2. Using and Caching Credentials with SSSD The System Security Services Daemon (SSSD) provides access to different identity and authentication providers. 13.2.1. About SSSD Most system authentication is configured locally, which means that services must check with a local user store to determine users and credentials. What SSSD does is allow a local service to check with a local cache in SSSD, but that cache may be taken from any variety of remote identity providers - an LDAP directory, an Identity Management domain, Active Directory, possibly even a Kerberos realm. SSSD also caches those users and credentials, so if the local system or the identity provider go offline, the user credentials are still available to services to verify. SSSD is an intermediary between local clients and any configured data store. This relationship brings a number of benefits for administrators: Reducing the load on identification/authentication servers. Rather than having every client service attempt to contact the identification server directly, all of the local clients can contact SSSD which can connect to the identification server or check its cache. Permitting offline authentication. SSSD can optionally keep a cache of user identities and credentials that it retrieves from remote services. This allows users to authenticate to resources successfully, even if the remote identification server is offline or the local machine is offline. Using a single user account. Remote users frequently have two (or even more) user accounts, such as one for their local system and one for the organizational system. This is necessary to connect to a virtual private network (VPN). Because SSSD supports caching and offline authentication, remote users can connect to network resources by authenticating to their local machine and then SSSD maintains their network credentials. Additional Resources While this chapter covers the basics of configuring services and domains in SSSD, this is not a comprehensive resource. Many other configuration options are available for each functional area in SSSD; check out the man page for the specific functional area to get a complete list of options. Some of the common man pages are listed in Table 13.1, "A Sampling of SSSD Man Pages" . There is also a complete list of SSSD man pages in the "See Also" section of the sssd(8) man page. Table 13.1. A Sampling of SSSD Man Pages Functional Area Man Page General Configuration sssd.conf(8) sudo Services sssd-sudo LDAP Domains sssd-ldap Active Directory Domains sssd-ad sssd-ldap Identity Management (IdM or IPA) Domains sssd-ipa sssd-ldap Kerberos Authentication for Domains sssd-krb5 OpenSSH Keys sss_ssh_authorizedkeys sss_ssh_knownhostsproxy Cache Maintenance sss_cache (cleanup) sss_useradd, sss_usermod, sss_userdel, sss_seed (user cache entry management)
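As a minimal illustration of the caching and offline authentication behavior described above, the following sketch shows an /etc/sssd/sssd.conf with a single LDAP domain; the server URI and search base are placeholder values, and cache_credentials = True is the option that lets users authenticate from the local cache when the identity server is unreachable: [sssd] config_file_version = 2 services = nss, pam domains = LDAP [domain/LDAP] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com ldap_search_base = dc=example,dc=com cache_credentials = True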
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/SSSD-Introduction
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See Enabling key value backend path and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . 1.1. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully choose a unique path name as the backend path that follows the naming convention, because it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to performing write or delete operations on the secret by using the following commands. Create a token matching the above policy.
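For reference, the Vault commands that correspond to each step of this procedure appear in the command listing for this section; using odf as an example backend path (replace it with your chosen path name), the sequence is: vault secrets enable -path=odf kv for KV secret engine API version 1, or vault secrets enable -path=odf kv-v2 for version 2; then echo ' path "odf/*" { capabilities = ["create", "read", "update", "delete", "list"] } path "sys/mounts" { capabilities = ["read"] }'| vault policy write odf - to create the policy; and finally vault token create -policy=odf -format json to create a token matching the policy.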
[ "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/preparing_to_deploy_openshift_data_foundation
Chapter 56. Protobuf Deserialize Action
Chapter 56. Protobuf Deserialize Action Deserialize payload to Protobuf 56.1. Configuration Options The following table summarizes the configuration options available for the protobuf-deserialize-action Kamelet: Property Name Description Type Default Example schema * Schema The Protobuf schema to use during serialization (as single-line) string "message Person { required string first = 1; required string last = 2; }" Note Fields marked with an asterisk (*) are mandatory. 56.2. Dependencies At runtime, the protobuf-deserialize-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:kamelet camel:core camel:jackson-protobuf 56.3. Usage This section describes how you can use the protobuf-deserialize-action . 56.3.1. Knative Action You can use the protobuf-deserialize-action Kamelet as an intermediate step in a Knative binding. protobuf-deserialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first": "John", "last":"Doe"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: "message Person { required string first = 1; required string last = 2; }" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-deserialize-action properties: schema: "message Person { required string first = 1; required string last = 2; }" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 56.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 56.3.1.2. Procedure for using the cluster CLI Save the protobuf-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f protobuf-deserialize-action-binding.yaml 56.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 56.3.2. Kafka Action You can use the protobuf-deserialize-action Kamelet as an intermediate step in a Kafka binding. 
protobuf-deserialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first": "John", "last":"Doe"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: "message Person { required string first = 1; required string last = 2; }" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-deserialize-action properties: schema: "message Person { required string first = 1; required string last = 2; }" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 56.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 56.3.2.2. Procedure for using the cluster CLI Save the protobuf-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f protobuf-deserialize-action-binding.yaml 56.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 56.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/protobuf-deserialize-action.kamelet.yaml
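After applying either binding, you can check that it was created and reconciled in the current namespace. A minimal verification sketch, assuming the resource names used in the examples above: oc get kameletbinding protobuf-deserialize-action-binding to confirm the KameletBinding exists, and oc get integrations to watch the corresponding Camel K integration reach the Running phase.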
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\": \"John\", \"last\":\"Doe\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: \"message Person { required string first = 1; required string last = 2; }\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-deserialize-action properties: schema: \"message Person { required string first = 1; required string last = 2; }\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f protobuf-deserialize-action-binding.yaml", "kamel bind --name protobuf-deserialize-action-binding timer-source?message='{\"first\":\"John\",\"last\":\"Doe\"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\": \"John\", \"last\":\"Doe\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: \"message Person { required string first = 1; required string last = 2; }\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-deserialize-action properties: schema: \"message Person { required string first = 1; required string last = 2; }\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f protobuf-deserialize-action-binding.yaml", "kamel bind --name protobuf-deserialize-action-binding timer-source?message='{\"first\":\"John\",\"last\":\"Doe\"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/protobuf-deserialize-action
6.14. Red Hat Virtualization 4.4 General Availability (ovirt-4.4.1)
6.14. Red Hat Virtualization 4.4 General Availability (ovirt-4.4.1) 6.14.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1061569 Previously, if you requested multiple concurrent network changes on a host, some requests were not handled due to a 'reject on busy' service policy. The current release fixes this issue with a new service policy: If resources are not available on the server to handle a request, the host queues the request for a configurable period. If server resources become available within this period, the server handles the request. Otherwise, it rejects the request. There is no guarantee for the order in which queued requests are handled. BZ# 1437559 When a virtual machine is loading, the Manager machine sends the domain XML with a NUMA Configuration CPU list containing the current CPU IDs. As a result, the libvirt/QEMU issued a warning that the NUMA Configuration CPU list is incomplete, and should contain IDs for all of the virtual CPUs. In this release, the warning no longer appears in the log. BZ# 1501798 Previously, using ovirt-engine-rename did not handle the OVN provider correctly. This caused bad IP address and hostname configurations, which prevented adding new hosts and other related issues. The current release fixes this issue. Now, ovirt-engine-rename handles ovirt-provider-ovn correctly, resolving the issues. BZ# 1569593 When deploying the self-hosted engine on a host, the Broker and Agent Services are brought down momentarily. When the VDSM service attempted to send a get_stats message before the services are restarted, the communication failed and the VDSM logged an error message. In this release, such events now result in a warning, and are not flagged or logged as errors. BZ# 1569926 Previously, commands trying to access an unresponsive NFS storage domain remained blocked for 20-30 minutes, which had significant impacts. This was caused by the non-optimal values of the NFS storage timeout and retry parameters. The current release fixes this issue: It changes these parameter values so commands to a non-responsive NFS storage domain fail within one minute. BZ# 1573600 Previously, importing a virtual machine (VM) from a snapshot that included the memory disk failed if you imported it to a storage domain that is different from the storage domain where the snapshot was created. This happened because the memory disk depended on the storage domain remaining unchanged. The current release fixes this issue. Registration of the VM with its memory disks succeeds. If the memory disk is not in the RHV Manager database, the VM creates a new one. BZ# 1583328 Previously, a custom scheduler policy was used without the HostDevice filter. Consequently, the virtual machine was scheduled on an unsupported host, causing a null pointer exception. With this update, some filter policy units are now mandatory, including HostDevice . These filter policy units are always active, cannot be disabled, and they are no longer visible in the UI or API. These filters are mandatory: Compatibility-Version CPU-Level CpuPinning HostDevice PinToHost VM leases ready BZ# 1585986 Previously, if you lowered the cluster compatibility version, the change did not propagate to the self-hosted engine virtual machine. As a result, the self-hosted engine virtual machine was not compatible with the new cluster version; you could not start or migrate it to another host in the cluster. 
The current release fixes this issue: The lower cluster compatibility version propagates to the self-hosted engine virtual machine; you can start and migrate it. BZ# 1590911 Previously, if two or more templates had the same name, selecting any of these templates displayed the same details from only one of the templates. This happened because the Administration Portal identified the selected template using a non-unique template name. The current release fixes this issue by using the template ID, which is unique, instead. BZ# 1596178 Previously, the VM Portal was inconsistent in how it displayed pool cards. After a user took all of the virtual machines from them, the VM Portal removed automatic pool cards but continued displaying manual pool cards. The current release fixes this issue: VM Portal always displays a pool card, and the card has a new label that shows how many virtual machines the user can take from the pool. BZ# 1598266 When a system had many FC LUNs with many paths per LUN, and a high I/O load, scanning of FC devices became slow, causing timeouts in monitoring VM disk size, and making VMs non-responsive. In this release, FC scans have been optimized for speed, and VMs are much less likely to become non-responsive. BZ# 1612152 Previously, Virtual Data Optimizer (VDO) statistics were not available for VDO volumes with an error, so VDO monitoring from VDSM caused a traceback. This update fixes the issue by correctly handling the different outputs from the VDO statistics tool. BZ# 1634742 Previously, if you decided to redeploy RHV Manager as a hosted engine, running the ovirt-hosted-engine-cleanup command did not clean up the /etc/libvirt/qemu.conf file correctly. Then, the hosted engine redeployment failed to restart the libvirtd service because libvirtd-tls.socket remained active. The current release fixes this issue. You can run the cleanup tool and redeploy the Manager as a hosted engine. BZ# 1639360 Previously, mixing the Logical Volume Manager (LVM) activation and deactivation commands with other commands caused possible undefined LVM behavior and warnings in the logs. The current release fixes this issue. It runs the LVM activation and deactivation commands separately from other commands. This produces resulting well-defined LVM behavior and clear errors in case of failure. BZ# 1650417 Previously, if a host failed and if the RHV Manager tried to start the high-availability virtual machine (HA VM) before the NFS lease expired, OFD locking caused the HA VM to fail with the error, "Failed to get "write" lock Is another process using the image?." If the HA VM failed three times in a row, the Manager could not start it again, breaking the HA functionality. The current release fixes this issue. RHV Manager would continue to try starting the VM even after three failures (the frequency of the attempts decreases over time). Eventually, once the lock expires, the VM would be started. BZ# 1650505 Previously, after increasing the cluster compatibility version of a cluster with virtual machines that had outstanding configuration changes, those changes were reverted. The current release fixes this issue. It applies both the outstanding configuration changes and the new cluster compatibility version to the virtual machines. BZ# 1654555 Previously the / filesystem automatically grew to fit the whole disk, and the user could not increase the size of /var or /var/log . 
This happened because, if a customer specified a disk larger than 49 GB while installing the Hosted Engine, the whole logical volume was allocated to the root ( / ) filesystem. In contrast, for the RHVM machine, the critical filesystems are /var and /var/log . The current release fixes this issue. Now, the RHV Manager appliance is based on the logical volume manager (LVM). At setup time, its PV and VG are automatically extended, but the logical volumes (LVs) are not. As a result, after installation is complete, you can extend all of the LVs in the Manager VM using the free space in the VG. BZ# 1656621 Previously, an imported VM always had 'Cloud-Init/Sysprep' turned on. The Manager created a VmInit even when one did not exist in the OVF file of the OVA. The current release fixes this issue: The imported VM only has 'Cloud-Init/Sysprep' turned on if the OVA had it enabled. Otherwise, it is disabled. BZ# 1658101 In this release, when updating a Virtual Machine using a REST API, not specifying the console value now means that the console state should not be changed. As a result, the console keeps its state. BZ# 1659161 Previously, changing the template version of a VM pool created from a delete-protected VM made the VM pool non-editable and unusable. The current release fixes this issue: It prevents you from changing the template version of the VM pool whose VMs are delete-protected and fails with an error message. BZ# 1659574 Previously, after upgrading RHV 4.1 to a later version, high-availability virtual machines (HA VMs) failed validation and did not run. To run the VMs, the user had to reset the lease Storage Domain ID. The current release fixes this issue: It removes the validation and regenerates the lease information data when the lease Storage Domain ID is set. After upgrading RHV 4.1, HA VMs with lease Storage Domain IDs run. BZ# 1660071 Previously, when migrating a paused virtual machine, the Red Hat Virtualization Manager did not always recognize that the migration completed. With this update, the Manager immediately recognizes when migration is complete. BZ# 1664479 When you use the engine ("Master") to set the high-availability host running the engine virtual machine (VM) to maintenance mode, the ovirt-ha-agent migrates the engine VM to another host. Previously, in specific cases, such as when these VMs had an old compatibility version, this type of migration failed. The current release fixes this problem. BZ# 1670102 Previously, to get the Cinder Library (cinderlib), you had to install the OpenStack repository. The current release fixes this issue by providing a separate repository for cinderlib. To enable the repository, enter: To install cinderlib, enter: BZ# 1676582 Previously, the user interface used the wrong unit of measure for the VM memory size in the VM settings of Hosted Engine deployment via cockpit: It showed MB instead of MiB. The current release fixes this issue: It uses MiB as the unit of measure. BZ# 1678007 Before this update, you could import a virtual machine from a cluster with a compatibility version lower than the target cluster, and the virtual machine's cluster version would not automatically update to the new cluster's compatibility version, causing the virtual machine's configuration to be invalid. Consequently, you could not run the virtual machine without manually changing its configuration. With this update, the virtual machine's cluster version automatically updates to the new cluster's compatibility version. 
You can import virtual machines from cluster compatibility version 3.6 or newer. BZ# 1678262 Previously, when you created a virtual machine from a template, the BIOS type of defined in the template did not take effect on the new virtual machine. Consequently, the BIOS type on the new virtual machine was incorrect. With this update, this bug is fixed, so the BIOS type on the new virtual machine is correct. BZ# 1679471 Previously, the console client resources page showed truncated titles for some locales. The current release fixes this issue. It re-arranges the console client resources page layout as part of migrating from Patternfly 3 to Patternfly 4 and fixes the truncated titles. BZ# 1680368 Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed. BZ# 1684266 When a large disk is converted as part of VM export to OVA, it takes a long time. Previously, the SSH channel the export script timed out and closed due to the long period of inactivity, leaving an orphan volume. The current release fixes this issue: Now, the export script adds some traffic to the SSH channel during disk conversion to prevent the SSH channel from being closed. BZ# 1684537 Previously, a virtual machine could crash with the message "qemu-kvm: Failed to lock byte 100" during a live migration with storage problems. The current release fixes this issue in the underlying platform so the issue no longer happens. BZ# 1685034 after_get_caps is a vdsm hook that periodically checks for a database connection. This hook requires ovs-vswitchd to be running in order to execute properly. Previously, the hook ran even when ovs-vswitchd was disabled, causing an error to be logged to /var/log/messages, eventually flooding it. Now, when the hook starts, it checks if the OVS service is available, and bails out of the hook when the service is not available, so the log is no longer flooded with these error messages. BZ# 1686575 Previously, the self-hosted engine high availability host's management network was configured during deployment. The VDSM took over the Network Manager and configured the selected network interface during initial deployment, while the Network Manager remained disabled. During restore, there was no option to attach additional (non-default) networks, and the restore process failed because the high-availability host had no connectivity to networks previously configured by the user that were listed in the backup file. In this release, the user can pause the restore process, manually add the required networks, and resume the restore process to completion. BZ# 1688052 Previously, the gluster fencing policy check failed due to a non-iterable object and threw an exception. The code also contained a minor typo. The current release fixes these issues. BZ# 1688159 Previously, when a virtual machine migration entered post-copy mode and remained in that mode for a long time, the migration sometimes failed and the migrated virtual machine was powered off. In this release, post-copy migrations are maintained to completion. BZ# 1692592 Previously, items with number ten and higher on the BIOS boot menu were not assigned sequential indexes. This made it difficult to select those items. 
The current release fixes this issue. Now, items ten and higher are assigned letter indexes. Users can select those items by entering the corresponding letter.

BZ# 1693628 Previously, the state of the user session was not saved correctly in the Engine database, causing many unnecessary database updates to be performed. The current release fixes this issue: Now, the user session state is saved correctly on the first update.

BZ# 1693813 Previously, if you updated the Data Center (DC) level, and the DC had a VM with a lower custom compatibility level than the DC's level, the VM could not resume due to a "not supported custom compatibility version." The current release fixes this issue: It validates the DC before upgrading the DC level. If the validation finds VMs with old custom compatibility levels, it does not upgrade the DC level: Instead, it displays "Cannot update Data Center compatibility version. Please resume/power off the following VMs before updating the Data Center."

BZ# 1696313 Before this update, some architecture-specific dependencies of VDSM were moved to safelease in order to keep VDSM architecture-agnostic. With this update, those dependencies have been returned to VDSM and removed from safelease.

BZ# 1698102 Previously, engine-setup did not provide enough information about configuring ovirt-provider-ovn. The current release fixes this issue by providing more information in the engine-setup prompt and documentation that helps users understand their choice and follow-up actions.

BZ# 1700623 Previously, moving a disk resulted in the wrong SIZE/CAP key in the volume metadata. This happened because creating a volume that had a parent overwrote the size of the newly-created volume with the parent size. As a result, the volume metadata contained the wrong volume size value. The current release fixes this issue, so the volume metadata contains the correct value.

BZ# 1703112 In some scenarios, the PCI address of a hotplugged SR-IOV vNIC was overwritten by an empty value, and as a result, the NIC name in the virtual machine was changed following a reboot. In this release, the vNIC PCI address is stored in the database and the NIC name persists following a virtual machine reboot.

BZ# 1703428 Previously, when importing a KVM into Red Hat Virtualization, "Hardware Clock Time Offset" was not set. As a result, the Manager machine did not recognize the guest agent installed in the virtual machine. In this release, the Manager machine recognizes the guest agent on a virtual machine imported from KVM, and the "Hardware Clock Time Offset" won't be null.

BZ# 1707225 Before this update, there was no way to back up and restore the Cinderlib database. With this update, the engine-backup command includes the Cinderlib database in both backup and restore operations.

BZ# 1711902 In a Red Hat Virtualization (RHV) environment with VDSM version 4.3 and Manager version 4.1, the DiskTypes are parsed as int values. However, in an RHV environment with Manager version > 4.1, the DiskTypes are parsed as strings. That compatibility mismatch produced an error: "VDSM error: Invalid parameter: 'DiskType=2'". The current release fixes this issue by changing the string value back to an int, so the operation succeeds with no error.

BZ# 1713724 Previously, converting a storage domain to the V5 format failed when, following an unsuccessful delete volume operation, partly-deleted volumes with cleared metadata remained in the storage domain.
The current release fixes this issue. Converting a storage domain succeeds even when partly-deleted volumes with cleared metadata remain in the storage domain.

BZ# 1714528 Previously, some HTML elements in the Cluster Upgrade dialog had missing or duplicated IDs, which impaired automated UI testing. The current release fixes this issue. It provides missing IDs and removes duplicates to improve automated UI testing.

BZ# 1715393 Previously, if you changed a virtual machine's BIOS Type chipset from one of the Q35 options to Cluster default or vice versa while USB Policy or USB Support was Enabled, the change did not update the USB controller to the correct setting. The current release fixes this issue. The same actions update the USB controller correctly.

BZ# 1717390 Previously, if you hot-unplugged a virtual machine interface shortly after booting the virtual machine, the unplugging action failed with an error. When this happened, it was because VM monitoring did not report the alias of the interface soon enough, and VDSM could not identify the vNIC to unplug. The current release fixes this issue: If the alias is missing during hot unplug, the Engine generates one on the fly.

BZ# 1718141 Previously, the python3-ovirt-engine-sdk4 package did not include the all_content attribute of the HostNicService and HostNicsService. As a result, this attribute was effectively unavailable to python3-ovirt-engine-sdk4 users. The current release fixes this issue by adding the all_content parameter to the python3-ovirt-engine-sdk4 (see the example below).

BZ# 1719990 Previously, when creating a virtual machine with the French language selected, the Administration Portal did not accept the memory size using the French abbreviation Mo instead of MB. After setting the value with the Mo suffix, the value was reset to zero. With this update, the value is parsed correctly and the value remains as entered.

BZ# 1720747 Previously, if ovirt-ha-broker restarted while the RHV Manager (engine) was querying the status of the self-hosted engine cluster, the query could get stuck. If that happened, the most straightforward workaround was to restart the RHV Manager. This happened because the RHV Manager periodically checked the status of the self-hosted engine cluster by querying the VDSM daemon on the cluster host. With each query, VDSM checked the status of the ovirt-ha-broker daemon over a Unix Domain Socket. The communication between VDSM and ovirt-ha-broker wasn't enforcing a timeout. If ovirt-ha-broker was restarting, such as trying to recover from a storage issue, the VDSM request could get lost, causing VDSM and the RHV Manager to wait indefinitely. The current release fixes this issue. It enforces a timeout in the communication channel between VDSM and ovirt-ha-broker. If ovirt-ha-broker cannot reply to VDSM within a certain timeout, VDSM reports a self-hosted engine error to the RHV Manager.

BZ# 1720795 Previously, the Manager searched for guest tools only on ISO domains, not data domains. The current release fixes this issue: Now, if the Manager detects a new tool on data domains or ISO domains, it displays a mark for the Windows VM.

BZ# 1721804 Before this update, libvirt did not support launching virtual machines with names ending with a period, even though the Manager did. This prevented launching virtual machines with names ending with a period. With this update, the Administration Portal and the REST API now prevent ending the name of a virtual machine with a period, resolving the issue.
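The following is a minimal sketch, not taken from the release itself, of how the all_content behavior described in BZ# 1718141 looks at the REST level (the SDK parameter mirrors the API model). The engine FQDN, credentials, CA file, and host ID are placeholders, and it is an assumption that the parameter is exposed as the all_content query parameter, as it is for other collections:

# List a host's NICs and request the full ("all content") representation.
# ENGINE_FQDN, the admin credentials, ca.pem, and HOST_ID are placeholders.
curl --cacert ca.pem \
     --user 'admin@internal:PASSWORD' \
     --header 'Accept: application/xml' \
     'https://ENGINE_FQDN/ovirt-engine/api/hosts/HOST_ID/nics?all_content=true'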
BZ# 1722854 Previously, while VDSM was starting, the definition of the network filter vdsm-no-mac-spoofing was removed and recreated to ensure the filter was up to date. This occasionally resulted in a timeout during the start of VDSM. The current release fixes this issue. Instead of removing and recreating the filter, the vdsm-no-mac-spoofing filter is updated during the start of VDSM. This update takes less than a second, regardless of the number of vNICs using this filter.

BZ# 1723668 Previously, during virtual machine shut down, the VDSM command Get Host Statistics occasionally failed with an Internal JSON-RPC error {'reason': '[Errno 19] vnet<x> is not present in the system'}. This failure happened because the shutdown could make an interface disappear while statistics were being gathered. The current release fixes this issue. It prevents such failures from being reported.

BZ# 1724002 Previously, cloud-init could not be used on hosts with FIPS enabled. With this update, cloud-init can be used on hosts with FIPS enabled.

BZ# 1724959 Previously, the About dialog in the VM Portal provided a link to GitHub for reporting issues. However, RHV customers should use the Customer Portal to report issues. The current release fixes this issue. Now, the About dialog provides a link to the Red Hat Customer Portal.

BZ# 1728472 Previously, the RHV Manager reported the network as out of sync because the Linux kernel applied the default gateway IPv6 router advertisements, and the IPv6 routing table was not configured by RHV. The current release fixes this issue. The IPv6 routing table is configured by RHV. NetworkManager manages the default gateway IPv6 router advertisements.

BZ# 1729511 During installation or upgrade to Red Hat Virtualization 4.3, engine-setup failed if the PKI Organization Name in the CA certificate included non-ASCII characters. In this release, the upgrade engine-setup process completes successfully.

BZ# 1729811 Previously, the guest_cur_user_name field of the vm_dynamic database table was limited to 255 characters, not enough for more than approximately 100 user names. As a result, when too many users logged in, updating the table failed with an error. The current release fixes this issue by changing the field type from VARCHAR(255) to TEXT.

BZ# 1730264 Previously, enabling port mirroring on networks whose user-visible name was longer than 15 characters failed. This happened because port mirroring tried to use this long user-visible network name, which was not a valid network name. The current release fixes this issue. Now, instead of the user-visible name, port mirroring uses the VDSM network name. Therefore, you can enable port mirroring for networks whose user-visible name is longer than 15 characters.

BZ# 1731212 Previously, the RHV landing page did not support scrolling. With lower screen resolutions, some users could not use the log in menu option for the Administration Portal or VM Portal. The current release fixes this issue by migrating the landing and login pages to PatternFly 4, which displays horizontal and vertical scroll bars when needed. Users can access the entire screen regardless of their screen resolution or zoom setting.

BZ# 1731590 Before this update, previewing a snapshot of a virtual machine, where the snapshot of one or more of the machine's disks did not exist or had no image with active set to "true", caused a null pointer exception to appear in the logs, and the virtual machine remained locked.
With this update, before a snapshot preview occurs, a database query checks for any damaged images in the set of virtual machine images. If the query finds a damaged image, the preview operation is blocked. After you fix the damaged image, the preview operation should work.

BZ# 1733227 Previously, an issue with the button on External Provider Imports prevented users from importing virtual machines (VMs) from external providers such as VMware. The current release fixes this issue and users can import virtual machines from external providers.

BZ# 1733843 Previously, exporting a virtual machine (VM) to an Open Virtual Appliance (OVA) file archive failed if the VM was running on the Host performing the export operation. The export process failed because doing so created a virtual machine snapshot, and while the image was in use, the RHV Manager could not tear down the virtual machine. The current release fixes this issue. If the VM is running, the RHV Manager skips tearing down the image. Exporting the OVA of a running VM succeeds.

BZ# 1737234 Previously, if you sent the RHV Manager an API command to attach a non-existing ISO to a VM, it attached an empty CD or left an existing one intact. The current release fixes this issue. Now, the Manager checks if the specified ISO exists, and throws an error if it doesn't (see the example below).

BZ# 1739377 Previously, creating a snapshot did not correctly save the Cloud-Init/Sysprep settings for the guest OS. If you tried to clone a virtual machine from the snapshot, it did not have valid values to initialize the guest OS. The current release fixes this issue. Now, creating a snapshot correctly saves the Cloud-Init/Sysprep configuration for the guest OS.

BZ# 1741792 Previously, using LUKS alone was a problem because the RHV Manager could reboot a node using Power Management commands. However, the node would not reboot because it was waiting for the user to enter a decrypt/open/unlock passphrase. This release fixes the issue by adding clevis RPMs to the Red Hat Virtualization Host (RHVH) image. As a result, a Manager can automatically unlock/decrypt/open an RHVH using TPM or NBDE.

BZ# 1743269 Previously, upgrading RHV from version 4.2 to 4.3 made the 10-setup-ovirt-provider-ovn.conf file world-readable. The current release fixes this issue, so the file has no unnecessary permissions.

BZ# 1743296 Before this update, selecting templates or virtual machines did not display the proper details when templates or virtual machines with the same name were saved in different Data Centers, because the machine's name, instead of its GUID, was used to fetch the machine's details. With this update, the query uses the virtual machine's GUID, and the correct details are displayed.

BZ# 1745384 Previously, trying to update the IPv6 gateway in the Setup Networks dialog removed it from the network attachment. The current release fixes this issue: You can update the IPv6 gateway if the related network has the default route role.

BZ# 1746699 Before this update, copying disks created by virt-v2v failed with an Invalid Parameter Exception, Invalid parameter: 'DiskType=1'. With this release, copying disks succeeds.

BZ# 1746700 The ovirt-host-deploy package uses otopi. Previously, otopi could not handle non-ASCII text in /root/.ssh/authorized_keys and failed with an error: 'ascii' codec can't decode byte 0xc3 in position 25: ordinal not in range(128). The new release fixes this issue by adding support for Unicode characters to otopi.
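As context for the ISO validation described in BZ# 1737234, the following is a hedged sketch of the kind of REST request that attaches an ISO to a VM's CD-ROM drive; with the fix, a non-existent ISO ID returns an error instead of silently attaching an empty CD. The engine FQDN, credentials, CA file, VM ID, CD-ROM ID, and ISO file ID are placeholders:

# Change the CD of a running VM (current=true changes only the active session).
# ENGINE_FQDN, the admin credentials, ca.pem, VM_ID, CDROM_ID, and example.iso are placeholders.
curl --request PUT \
     --cacert ca.pem \
     --user 'admin@internal:PASSWORD' \
     --header 'Content-Type: application/xml' \
     --data '<cdrom><file id="example.iso"/></cdrom>' \
     'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/cdroms/CDROM_ID?current=true'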
BZ# 1749347 Previously, systemd units from failed conversions were not removed from the host. These could cause collisions and prevent subsequent conversions from starting because the service name was already "in use." The current release fixes this issue. If the conversion fails, the units are explicitly removed so they cannot interfere with subsequent conversions.

BZ# 1749630 Previously, the Administration Portal showed very high memory usage for a host with no virtual machines running because it was not counting slab reclaimable memory. As a result, virtual machines could not be migrated to that host. The current release fixes that issue. The free host memory is evaluated correctly.

BZ# 1750212 Previously, when you tried to delete the snapshot of a virtual machine with a LUN disk, RHV parsed its image ID incorrectly and used "mapper" as its value. This issue produced a null pointer error (NPE) and made the deletion fail. The current release fixes this issue, so the image ID parses correctly and the deletion succeeds.

BZ# 1750482 Previously, when you used the VM Portal to configure a virtual machine (VM) to use Windows OS, it failed with the error, "Invalid time zone for given OS type." This happened because the VM's timezone for Windows OS was not set properly. The current release fixes this issue. If the time zone in the VM template or VM is not compatible with the VM OS, it uses the default time zone. For Windows, this default is "GMT Standard Time". For other OSs, it is "Etc/GMT". Now, you can use the VM Portal to configure a VM to use Windows OS.

BZ# 1751215 Previously, after upgrading RHV from version 4.1 to 4.3, the Graphical Console for the self-hosted engine virtual machine was locked because the default display in version 4.1 is VGA. The current release fixes this issue. While upgrading to version 4.3, it changes the default display to VNC. As a result, the Graphical Console for the Hosted-Engine virtual machine is changeable.

BZ# 1754363 With this release, the number of DNS configuration SQL queries that the Red Hat Virtualization Manager runs is significantly reduced, which improves the Manager's ability to scale.

BZ# 1756244 Previously, in an IPv4-only host with a .local FQDN, the deployment kept looping, searching for an available IPv6 prefix, until it failed. This was because the hosted-engine setup picked a link-local IP address for the host. The hosted-engine setup could not ensure that the Engine and the host are on the same subnet when one of them used a link-local address. The Engine must not use a link-local address if it is to be reachable through a routed network. The current release fixes this issue: Even if the hostname is resolved to a link-local IP address, the hosted-engine setup ignores the link-local IP addresses and tries to use another IP address as the address for the host. The hosted-engine can deploy on hosts, even if the hostname is resolved to a link-local address.

BZ# 1759388 Previously, ExecStopPost was present in the VDSM service file. This meant that, after stopping VDSM, some of its child processes could continue and, in some cases, lead to data corruption. The current release fixes this issue. It removes ExecStopPost from the VDSM service. As a result, terminating VDSM also stops its child processes.

BZ# 1763084 Previously, some migrations failed because of invalid host certificates whose Common Name (CN) contained an IP address, and because using the CN for hostname matching is obsolete.
The current release fixes this issue by filling in the Subject Alternative Name (SAN) during host installation, host upgrade, and certificate enrollment. Periodic certificate validation includes the SAN field and raises an error if it is not filled.

BZ# 1764943 Previously, while creating virtual machine snapshots, if the VDSM's command to freeze a virtual machine's file systems exceeded the snapshot command's 3-minute timeout period, creating snapshots failed, causing virtual machines and disks to lock. The current release adds two key-value pairs to the engine configuration. You can configure these using the engine-config tool:
Setting LiveSnapshotPerformFreezeInEngine to true enables the Manager to freeze VMs' file systems before it creates a snapshot of them.
Setting LiveSnapshotAllowInconsistent to true enables the Manager to continue creating snapshots if it fails to freeze VMs' file systems.

BZ# 1769339 Previously, extending a floating QCOW disk did not work because the user interface and REST API ignored the getNewSize parameter. The current release fixes this issue and validates the settings so you can extend a floating QCOW disk.

BZ# 1769463 Previously, in a large environment, the oVirt REST API's response to a request for the cluster list was slow. This slowness was caused by processing a large amount of surplus data from the engine database about out-of-sync hosts in the cluster, data that was ultimately not included in the response. The current release fixes this issue. The query excludes the surplus data, and the API responds quickly.

BZ# 1770237 Previously, the virtual machine (VM) instance type edit and create dialog displayed a vNIC profile editor. This item gave users the impression they could associate a vNIC profile with an instance type, which is not valid. The current release fixes this issue by removing the vNIC profile editor from the instance edit and create dialog.

BZ# 1770889 Previously, VDSM did not send the Host.getStats message: It did not convert the description field of the Host.getStats message to utf-8, which caused the JSON layer to fail. The current release fixes this issue. It converts the description field to utf-8 so that VDSM can send the Host.getStats message.

BZ# 1775248 Previously, issues with aliases for USB, channel, and PCI devices generated WARN and ERROR messages in engine.log when you started virtual machines. RHV Manager omitted the GUID from the alias of the USB controller device. This information is required later to correlate the alias with the database instance of the USB device. As a result, duplicate devices were being created. Separately, channel and PCI devices whose aliases did not contain GUIDs also threw exceptions and caused warnings. The current release fixes these issues. It removes code that prevented the USB controller device from sending the correct alias when launching the VM. The GUID is added to the USB controller devices' aliases within the domain XML. It also filters channel and PCI controllers from the GUID conversion code to avoid printing exception warnings for these devices.

BZ# 1777954 Previously, for the list of virtual machine templates in the Administration Portal, a paging bug hid every other page, and the templates on those pages, from view. The current release fixes this issue and displays every page of templates correctly.

BZ# 1781095 Before this update, the engine-cleanup command enabled you to do a partial cleanup by prompting you to select which components to remove, even though partial cleanup is not supported.
This resulted in a broken system. With this update, the prompt no longer appears and only full cleanup is possible.

BZ# 1783180 Previously, a problem with AMD EPYC CPUs that were missing the virt-ssbd CPU flag prevented Hosted Engine installation. The current release fixes this issue.

BZ# 1783337 Previously, the rename tool did not renew the websocketproxy certificates and did not update the value of WebSocketProxy in the engine configuration. This caused issues such as the VNC browser console not being able to connect to the server. The current release fixes this issue. Now, ovirt-engine-rename handles the websocket proxy correctly. It regenerates the certificate, restarts the service, and updates the value of WebSocketProxy.

BZ# 1783815 Previously, if a virtual machine (VM) was forcibly shut down by SIGTERM, in some cases the VDSM did not handle the libvirt shutdown event that contained information about why the VM was shut down and evaluated it as if the guest had initiated a clean shutdown. The current release fixes this issue: VDSM handles the shutdown event, and the Manager restarts the high-availability VMs as expected.

BZ# 1784049 Previously, if you ran a virtual machine (VM) with an old operating system such as RHEL 6 and the BIOS Type was a Q35 Chipset, it caused a kernel panic. The current release fixes this issue. If a VM has an old operating system and the BIOS Type is a Q35 Chipset, it uses the VirtIO-transitional model for some devices, which enables the VM to run normally.

BZ# 1784398 Previously, because of a UI regression bug in the Administration Portal, you could not add system permissions to a user. For example, clicking Add System Permissions, selecting a Role to assign, and clicking OK did not work. The current release fixes this issue so you can add system permissions to a user.

BZ# 1785364 Previously, when restoring a backup, engine-setup did not restart ovn-northd, so the ssl/tls configuration was outdated. With this update, ovn-northd is restarted during the restore and reloads the restored ssl/tls configuration.

BZ# 1785615 Previously, trying to mount an ISO domain (File Change CD) within the Console generated a "Failed to perform 'Change CD' operation" error due to the deprecation of REST API v3. The current release fixes this issue: It upgrades Remote Viewer to use REST API v4 so mounting an ISO domain within the console works.

BZ# 1788424 Previously, if you disabled the virtio-scsi drive and imported a virtual machine that had a direct LUN attached, the import validation failed with a "Cannot import VM. VirtIO-SCSI is disabled for the VM" error. This happened because the validation tried to verify that the virtio-scsi drive was still attached to the VM. The current release fixes this issue. If the Disk Interface Type is not virtio-scsi, the validation does not search for the virtio-scsi drive. Disk Interface Type uses an alternative driver, and the validation passes.

BZ# 1788783 Previously, when migrating a virtual machine, information about the running guest agent was not always passed to the destination host. In these cases, the migrated virtual machine on the destination host did not receive an after_migration life cycle event notification. This update fixes this issue. The after_migration notification works as expected now.

BZ# 1793481 Before this update, you could enable a raw format disk for incremental backup from the Administration Portal or using the REST API, but because incremental backup does not support raw format disks, the backup failed.
With this update, you can only enable incremental backup for QCOW2 format disks, preventing inclusion of raw format disks.

BZ# 1795886 Before this update, validation succeeded for an incremental backup operation that included raw format disks, even though incremental backup does not support raw format disks. With this update, validation succeeds for a full backup operation for a virtual machine with a raw format disk, but validation fails for an incremental backup operation for a virtual machine with a raw format disk.

BZ# 1796811 The apache-sshd library is no longer bundled in the rhvm-dependencies package. The apache-sshd library is now packaged in its own rpm package.

BZ# 1798175 Previously, due to a regression, KVM importing failed and threw exceptions. This was due to a missing readinto function on the StreamAdapter. The current release fixes this issue so that KVM importing works.

BZ# 1798425 Previously, importing virtual machines failed when the source version variable was null. With this update, validation of the source compatibility version is removed, enabling the import to succeed even when the source version variable is null.

BZ# 1801205 Previously, VM Pools set to HA could not be run. VM Pools are stateless. Nonetheless, a user could set a VM in a Pool as supporting HA, but then the VM could not be launched. The current release fixes this issue by disabling the HA checkbox, so the user can no longer set a VM Pool to support HA.

BZ# 1806276 Previously, the ovirt-provider-ovn network provider was non-functional on RHV 4.3.9 Hosted-Engine. This happened because, with FDP 20.A (bug 1791388), the OVS/OVN service no longer had the permissions to read the private SSL/TLS key file. The current release fixes this issue: It updates the private SSL/TLS key file permissions. OVS/OVN reads the key file and works as expected.

BZ# 1807937 Previously, if running a virtual machine with its Run Once configuration failed, the RHV Manager would try to run the virtual machine with its standard configuration on a different host. The current release fixes this issue. Now, if Run Once fails, the RHV Manager tries to run the virtual machine with its Run Once configuration on a different host.

BZ# 1808788 Previously, trying to run a VM failed with an unsupported configuration error if its configuration did not specify a NUMA node. This happened because the domain XML was missing its NUMA node section, and VMs require at least one NUMA node to run. The current release fixes this issue: If the user has not specified any NUMA nodes, the VM generates a NUMA node section. As a result, a VM where NUMA nodes were not specified launches regardless of how many offline CPUs are available.

BZ# 1809875 Before this update, a problem in the per-Data-Center loop that collects image information caused incomplete data for analysis for all but the last Data Center collected. With this update, the information is properly collected for all Data Centers, resolving the issue.

BZ# 1810893 Previously, using the Administration Portal to import a storage domain omitted custom mount options for NFS storage servers. The current release fixes this issue by including the custom mount options.

BZ# 1812875 Previously, when the Administration Portal was configured to use the French language, the user could not create virtual machines. This was caused by French translations that were missing from the user interface. The current release fixes this issue.
It provides the missing translations. Users can configure and create virtual machines while the Administration Portal is configured to use the French language.

BZ# 1813028 Previously, if you exported a virtual machine (VM) as an Open Virtual Appliance (OVA) file from a host that was missing a loop device, and imported the OVA elsewhere, the resulting VM had an empty disk (no OS) and could not run. This was caused by a timing and permissions issue related to the missing loop device. The current release fixes the timing and permission issues. As a result, the VM to OVA export includes the guest OS. Now, when you create a VM from the OVA, the VM can run.

BZ# 1816327 Previously, if you tried to start an already-running virtual machine (VM) on the same host, VDSM failed this operation too late and the VM on the host became hidden from the RHV Manager. The current release fixes the issue: VDSM immediately rejects attempts to start a running VM on the same host.

BZ# 1816777 Previously, when initiating the console from the VM Portal to noVNC, the console didn't work due to a missing 'path' parameter when initiating the console. In this release, the 'path' is not mandatory, and the noVNC console can initiate even when 'path' isn't provided.

BZ# 1819299 Previously, when loading a memory snapshot, the RHV Manager did not load existing device IDs. Instead, it created new IDs for each device. The Manager was unable to correlate the IDs with the devices and treated them as though they were unplugged. The current release fixes this issue. Now, the Manager consumes the device IDs and correlates them with the devices.

BZ# 1819960 Previously, if you used the update template script example of the ovirt-engine-sdk to import a virtual machine or template from an OVF configuration, it failed with a null-pointer exception (NPE). This happened because the script example did not supply the Storage Pool Id and Source Storage Domain Id. The current release fixes this issue. Now, the script gets the correct ID values from the image, so importing a template succeeds.

BZ# 1820140 Previously, with RHV Manager running as a self-hosted engine, the user could hotplug memory on the self-hosted engine virtual machine and exceed the physical memory of the host. In that case, restarting the virtual machine failed due to insufficient memory. The current release fixes this issue. It prevents the user from setting the self-hosted engine virtual machine's memory to exceed the active host's physical memory. You can only save configurations where the self-hosted engine virtual machine's memory is less than the active host's physical memory.

BZ# 1821164 While the RHV Manager is creating a virtual machine (VM) snapshot, it can time out and fail while trying to freeze the file system. If this happens, more than one VM can write data to the same logical volume and corrupt the data on it. In the current release, you can prevent this condition by configuring the Manager to freeze the VM's guest filesystems before creating a snapshot. To enable this behavior, run the engine-config tool and set the LiveSnapshotPerformFreezeInEngine key-value pair to true.

BZ# 1822479 Previously, when redeploying the RHV Manager as a hosted engine after cleanup, the libvirtd service failed to start. This happened because the libvirtd-tls.socket service was active. The current release fixes this issue. Now, when you run the ovirt-hosted-engine-cleanup tool, it stops the libvirtd-tls.socket service.
The libvirtd service starts when you redeploy RHV Manager as a hosted engine.

BZ# 1826248 Previously, the 'Host console SSO' feature did not work with Python 3, which is the default Python on RHEL 8. The code was initially written for Python 2 and was not properly modified for Python 3. The current release fixes this issue: The 'Host console SSO' feature works with Python 3.

BZ# 1830730 Previously, if the DNS query test timed out, it did not produce a log message. The current release fixes this issue: If a DNS query times out, it produces a "DNS query failed" message in the broker.log.

BZ# 1832905 In previous versions, engine-backup --mode=verify passed even if pg_restore emitted errors. The current release fixes this issue. The engine-backup --mode=verify command correctly fails if pg_restore emits errors.

BZ# 1834523 Previously, adding or removing a smart card to a running virtual machine did not work. The current release fixes this issue. When you add or remove a smart card, it saves this change to the virtual machine's run configuration. In the Administration Portal, the virtual machine indicates that a run configuration exists, and lists "Smartcard" as a changed field. When you restart the virtual machine, it applies the new configuration to the virtual machine.

BZ# 1834873 Previously, retrieving host capabilities failed for specific non-NUMA CPU topologies. The current release fixes this issue and correctly reports the host capabilities for those topologies.

BZ# 1835096 Previously, if creating a live snapshot failed because of a storage error, the RHV Manager would incorrectly report that it had been successful. The current release fixes this issue. Now, if creating a snapshot fails, the Manager correctly shows that it failed.

BZ# 1836609 Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed.

BZ# 1837266 Previously, if you backed up RHV Manager running as a self-hosted engine in RHV version 4.3, restoring it in RHV version 4.4 failed with particular CPU configurations. The current release fixes this issue. Now, restoring the RHV Manager with those CPU configurations succeeds.

BZ# 1838439 Previously, in the beta version of RHV 4.4, after adding a host to a cluster with compatibility version 4.2, editing the cluster reset its BIOS Type from the automatically detected value to Cluster default. As a result, virtual machines could not run because a Chip Set does not exist for Cluster Default. The current release fixes this issue. It preserves the original value of BIOS Type and prevents it from being modified when you edit the cluster. As a result, you can create and run virtual machines normally after editing cluster properties.

BZ# 1838493 Previously, creating a live snapshot with memory while LiveSnapshotPerformFreezeInEngine was set to True resulted in a virtual machine file system that is frozen when previewing or committing the snapshot with memory restore. In this release, the virtual machine runs successfully after creating a preview snapshot from a memory snapshot.

BZ# 1839967 Previously, running ovirt-engine-rename generated errors and failed because Python 3 renamed urlparse to urllib.parse. The current release fixes this issue.
Now, ovirt-engine-rename uses urllib.parse and runs successfully.

BZ# 1842260 Previously, when sending metrics and logs to an Elasticsearch instance that was not on OCP, you could not set usehttps to false while also not using Elasticsearch certificates (use_omelasticsearch_cert: false). As a result, you could not send data to Elasticsearch without https. The current release fixes this issue. Now, you can set the usehttps variable as expected and send data to Elasticsearch without https.

BZ# 1843089 Before this release, local storage pools were created but were not deleted during Self-Hosted Engine deployment, causing storage pool leftovers to remain. In this release, the cleanup is performed properly following Self-Hosted Engine deployment, and there are no storage pool leftovers.

BZ# 1845473 Previously, exporting a virtual machine or template to an OVA file incorrectly set its format in the OVF metadata file to "RAW". This issue caused problems using the OVA file. The current release fixes this issue. Exporting to OVA sets the format in the OVF metadata file to "COW", which represents the disk's actual format, qcow2.

BZ# 1847513 When you change the cluster compatibility version, it can also update the compatibility version of the virtual machines. If the update fails, it rolls back the changes. Previously, chipsets and emulated machines were not part of the cluster update. The current release fixes this issue. Now, you can also update chipsets and emulated machines when you update the cluster compatibility version.

BZ# 1849275 Previously, if the block path was unavailable for a storage block device on a host, the RHV Manager could not process host devices from that host. The current release fixes this issue. The Manager can process host devices even though a block path is missing.

BZ# 1850117 Previously, the `hosted-engine --set-shared-config storage` command failed to update the hosted engine storage. With this update, the command works.

BZ# 1850220 Old virtual machines that have not been restarted since user aliases were introduced in RHV version 4.2 use old device aliases created by libvirt. The current release adds support for those old device aliases and links them to the new user-aliases to prevent correlation issues and devices being unplugged.

6.14.2. Enhancements

This release of Red Hat Virtualization features the following enhancements:

BZ# 854932 The REST API in the current release adds the following updatable disk properties for floating disks (see the example below):
For Image disks: provisioned_size, alias, description, wipe_after_delete, shareable, backup, and disk_profile.
For LUN disks: alias, description, and shareable.
For Cinder and Managed Block disks: provisioned_size, alias, and description.
See Services.

BZ# 1080097 In this release, it is now possible to edit the properties of a Floating Disk in the Storage > Disks tab of the Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk.

BZ# 1107803 With this enhancement, oVirt uses NetworkManager and NetworkManager Stateful Configuration (nmstate) to configure host networking. The previous implementation used network-scripts, which are deprecated in CentOS 8. This usage of NetworkManager helps to share code with other software components. As a result, oVirt integrates better with RHEL-based software. Now, for example, the Cockpit web interface can see the host networking configuration, and oVirt can read the network configuration created by the Anaconda installer.
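As a rough illustration of the floating-disk updates added in BZ# 854932 above, the following sketch changes the alias and description of a floating image disk through the REST API. The engine FQDN, credentials, CA file, and disk ID are placeholders; this is an assumed usage pattern, not text from the release notes:

# Update the alias and description of a floating image disk.
# ENGINE_FQDN, the admin credentials, ca.pem, and DISK_ID are placeholders.
curl --request PUT \
     --cacert ca.pem \
     --user 'admin@internal:PASSWORD' \
     --header 'Content-Type: application/xml' \
     --data '<disk><alias>db-data-disk</alias><description>PostgreSQL data</description></disk>' \
     'https://ENGINE_FQDN/ovirt-engine/api/disks/DISK_ID'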
BZ# 1179273 The VDSM ssl_protocol, ssl_excludes, and ssl_ciphers config options have been removed. For details, see: Consistent security by crypto policies in Red Hat Enterprise Linux 8. To fine-tune your crypto settings, change or create your crypto policy. For example, for your hosts to communicate with legacy systems that still use insecure TLSv1 or TLSv1.1, change your crypto policy to LEGACY with the update-crypto-policies command.

BZ# 1306586 The floppy device has been replaced by a CDROM device for sysprep installation of Compatibility Versions 4.4 and later.

BZ# 1325468 After a high-availability virtual machine (HA VM) crashes, the RHV Manager tries to restart it indefinitely: at first with a short delay between restarts, and after a specified number of failed retries, with a longer delay. Also, the Manager starts crashed HA VMs in order of priority, delaying lower-priority VMs until higher-priority VMs are 'Up.' The current release adds new configuration options:
RetryToRunAutoStartVmShortIntervalInSeconds, the short delay, in seconds. The default value is 30.
RetryToRunAutoStartVmLongIntervalInSeconds, the long delay, in seconds. The default value is 1800, which equals 30 minutes.
NumOfTriesToRunFailedAutoStartVmInShortIntervals, the number of restart tries with short delays before switching to long delays. The default value is 10 tries.
MaxTimeAutoStartBlockedOnPriority, the maximum time, in minutes, before starting a lower-priority VM. The default value is 10 minutes.

BZ# 1358501 Network operations that span multiple hosts may take a long time. This enhancement shows you when these operations finish: It records start and end events in the Events tab of the Administration Portal and engine.log. If you use the Administration Portal to trigger the network operation, the portal also displays a pop-up notification when the operation is complete.

BZ# 1388599 In the default virtual machine template, the current release changes the default setting for "VM Type" to "server." Previously, it was "desktop."

BZ# 1403677 With this update, you can connect to a Gluster storage network over IPv6, without the need for IPv4.

BZ# 1427717 The current release adds the ability for you to select affinity groups while creating or editing a virtual machine (VM) or host. Previously, you could only add a VM or host by editing an affinity group.

BZ# 1450351 With this update, you can set the reason for shutting down or powering off a virtual machine when using a REST API request to execute the shutdown or power-off.

BZ# 1455465 In this release, the default "Optimized for" optimization type for bundled templates is now set to "Server".

BZ# 1475774 Previously, when creating/managing an iSCSI storage domain, there was no indication that the operation may take a long time. In this release, the following message has been added: "Loading... A large number of LUNs may slow down the operation."

BZ# 1477049 With this update, unmanaged networks are viewable by the user on the host NICs page at a glance. Each NIC indicates whether one of its networks is unmanaged by the oVirt engine. Previously, to view this indication, the user had to open the setup dialog, which was cumbersome.

BZ# 1482465 With this update, when viewing clusters, you can sort by the Cluster CPU Type and Compatibility Version columns.

BZ# 1512838 The current release adds a new capability: In the "Edit Template" window, you can use the "Sealed" checkbox to indicate whether a template is sealed.
The Compute > Templates window has a new "Sealed" column, which displays this information.

BZ# 1523289 With this update, you can check the list of hosts that are not configured for metrics, that is, those hosts on which the Collectd and Rsyslog/Fluentd services are not running. First, run the playbook 'manage-ovirt-metrics-services.yml'. Then, check the file /etc/ovirt-engine-metrics/hosts_not_configured_for_metrics.

BZ# 1546838 The current release displays a new warning when you use 'localhost' as an FQDN: "[WARNING] Using the name 'localhost' is not recommended, and may cause problems later on."

BZ# 1547937 This release adds a progress bar for the disk synchronization stage of Live Storage Migration.

BZ# 1564280 This enhancement adds support for OVMF with SecureBoot, which enables UEFI support for Virtual Machines.

BZ# 1572155 The current release adds the VM's current state and uptime to the Compute > Virtual Machine: General tab.

BZ# 1574443 Previously, it was problematic to put a host into maintenance mode while it was switching between the Connecting and Activating states. In this release, regardless of its initial state before the restart, the host is put into maintenance mode after it is restarted using its power management configuration.

BZ# 1581417 All new clusters with x86 architecture and compatibility version 4.4 or higher now set the BIOS Type to the Q35 Chipset by default, instead of the i440FX chipset.

BZ# 1593800 When creating a new MAC address pool, its ranges must not overlap with each other or with any ranges in existing MAC address pools.

BZ# 1595536 When a host is running in FIPS mode, VNC must use SASL authorization instead of regular passwords because of a weak algorithm inherent to the VNC protocol. The current release facilitates using SASL by providing an Ansible role, ovirt-host-setup-vnc-sasl, which you can run manually on FIPS-enabled hosts. This role does the following: Creates an empty SASL password database. Prepares the SASL config file for qemu. Changes the libvirt config file for qemu.

BZ# 1600059 Previously, when High Availability was selected for a new virtual machine, the Lease Storage Domain was set to a bootable Storage Domain automatically if the user did not already select one. In this release, a bootable Storage Domain is set as the lease Storage Domain for new High Availability virtual machines.

BZ# 1602816 Previously, if you tried to deploy hosted-engine over a teaming device, it would try to proceed and then fail with an error. The current release fixes this issue. It filters out teaming devices. If only teaming devices are available, it rejects the deployment with a clear error message that describes the issue.

BZ# 1603591 With this enhancement, while using cockpit or engine-setup to deploy RHV Manager as a Self-Hosted Engine, the options for specifying the NFS version include two additional versions, 4.0 and 4.2.

BZ# 1622700 Previously, multipath repeatedly logged irrelevant errors for local devices. In this release, local devices are blacklisted and irrelevant errors are no longer logged.

BZ# 1622946 With this update, the API reports extents information for sparse disks: which extents are data, read as zero, or unallocated (holes). This enhancement enables clients to use the imageio REST API to optimize image transfers and minimize storage requirements by skipping zero and unallocated extents.

BZ# 1640192 Before this update, you could enable FIPS on a host.
But because the engine was not aware of FIPS, it did not use the appropriate options with qemu when starting virtual machines, so the virtual machines were not fully operable. With this update, you can enable FIPS for a host in the Administration Portal, and the engine uses qemu with FIPS-compatible arguments. To enable FIPS for a host, in the Edit Host window, select the Kernel tab and check the FIPS mode checkbox.

BZ# 1640908 Previously, if there were hundreds of Fibre Channel LUNs, the Administration Portal dialog box for adding or managing storage domains took too long to render and might become unresponsive. This enhancement improves performance: It displays a portion of the LUNs in a table and provides right and left arrows that users can click to see the next or previous set of LUNs. As a result, the window renders normally and remains responsive regardless of how many LUNs are present.

BZ# 1641694 With this update, you can start the self-hosted engine virtual machine in a paused state. To do so, pass the paused start option to the hosted-engine command; to un-pause the virtual machine, run the corresponding hosted-engine start command.

BZ# 1643886 This update adds support for Hyper-V enlightenment for Windows virtual machines on hosts running RHEL 8.2 with cluster compatibility level set to 4.4. Specifically, Windows virtual machines now support the following Hyper-V functionality: reset, vpindex, runtime, frequencies, reenlightenment, and tlbflush.

BZ# 1647440 The current release adds a new feature: On the VM list page, the tooltip for the VM type icon shows a list of the fields you have changed between the current and the next run of the virtual machine.

BZ# 1651406 The current release enables you to migrate a group of virtual machines (VMs) that are in positive enforcing affinity with each other. You can use the new checkbox in the Migrate VM dialog to migrate this type of affinity group. You can use the following REST API to migrate this type of affinity group: http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/vm/methods/migrate/parameters/migrate_vms_in_affinity_closure . Putting a host into maintenance also migrates this type of affinity group.

BZ# 1652565 In this release, it is now possible to edit the properties of a Floating Disk in the Storage > Disks tab of the Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk.

BZ# 1666913 With this enhancement, if a network name contains spaces or is longer than 15 characters, the Administration Portal notifies you that the RHV Manager will rename the network using the host network's UUID as a basis for the new name.

BZ# 1671876 Suppose a Host has a pair of bonded NICs using (Mode 1) Active-Backup. Previously, the user clicked Refresh Capabilities to get the current status of this bonded pair. In the current release, if the active NIC changes, it refreshes the state of the bond in the Administration Portal and REST API. You do not need to click Refresh Capabilities.

BZ# 1674420 This update adds support for the following virtual CPU models: Intel Cascade Lake Server and Intel Ivy Bridge.

BZ# 1679110 This enhancement moves the pop-up ("toast") notifications from the upper right corner to the lower right corner, so they no longer cover the action buttons. Now, the notifications rise from the bottom right corner to within 400 px of the top.

BZ# 1679730 This update adds an audit log warning on an out-of-range IPv4 gateway static configuration for a host NIC. The validity of the gateway is assessed compared to the configured IP address and netmask.
This gives users better feedback and helps them notice incorrect configurations.

BZ# 1683108 This release adds a new 'status' column to the affinity group table that shows whether all of an affinity group's rules are satisfied (status = ok) or not (status = broken). The "Enforcing" option does not affect this status.

BZ# 1687345 Previously, RHV Manager created live virtual machine snapshots synchronously. If creating the snapshot exceeded the timeout period (default 180 seconds), the operation failed. These failures tended to happen with virtual machines that had large memory loads or clusters that had slow storage speeds. With this enhancement, the live snapshot operation is asynchronous and runs until it is complete, regardless of how long it takes.

BZ# 1688796 With this update, a new configuration variable, AAA_JAAS_ENABLE_DEBUG, has been added to enable Kerberos/GSSAPI debug on AAA. The default value is false. To enable debugging, create a new configuration file named /etc/ovirt-engine/engine.conf.d/99-kerberos-debug.conf with the following content: AAA_JAAS_ENABLE_DEBUG=true

BZ# 1691704 Red Hat Virtualization Manager virtual machines now support ignition configuration, and this feature can be used via the UI or API by any guest OS that supports it, for example, RHCOS or FCOS.

BZ# 1692709 With this update, each host's boot partition is explicitly stated in the kernel boot parameters. For example: boot=/dev/sda1 or boot=UUID=<id>

BZ# 1696245 Previously, while cloning a virtual machine, you could only edit the name of the virtual machine in the Clone Virtual Machine window. With this enhancement, you can fully customize any of the virtual machine settings in the Clone Virtual Machine window. This means, for example, that you can clone a virtual machine into a different storage domain.

BZ# 1700021 Previously, if a Certificate Authority ca.pem file was not present, the engine-setup tool automatically regenerated all PKI files, requiring you to reinstall or re-enroll certificates for all hosts. Now, if ca.pem is not present but other PKI files are, engine-setup prompts you to restore ca.pem from backup without regenerating all PKI files. If a backup is present and you select this option, then you no longer need to reinstall or re-enroll certificates for all hosts.

BZ# 1700036 This enhancement adds support for DMTF Redfish to RHV. To use this functionality, you use the Administration Portal to edit a Host's properties. On the Host's Power Management tab, you click + to add a new power management device. In the Edit fence agent window, you set Type to redfish and fill in additional details like login information and IP/FQDN of the agent.

BZ# 1700338 This enhancement enables you to use the RHV Manager's REST API to manage subscriptions and receive notifications based on specific events. In previous versions, you could do this only in the Administration Portal.

BZ# 1710491 With this enhancement, an EVENT_ID is logged when a virtual machine's guest operating system reboots. External systems such as Cloudforms and Manage IQ rely on the EVENT_ID log messages to keep track of the virtual machine's state.

BZ# 1712890 With this update, when you upgrade RHV, engine-setup notifies you if virtual machines in the environment have snapshots whose cluster levels are incompatible with the RHV version you are upgrading to. It is safe to let it proceed, but it is not safe to use these snapshots after the upgrade. For example, it is not safe to preview these snapshots.
There is an exception to the above: engine-setup does not notify you if the virtual machine is running the Manager as a self-hosted engine. For hosted-engine, it provides an automatic "Yes" and upgrades the virtual machine without prompting or notifying you. It is unsafe to use snapshots of the hosted-engine virtual machine after the upgrade.

BZ# 1716590 With this enhancement, on the "System" tab of the "New Virtual Machine" and "Edit Virtual Machine" windows, the "Serial Number Policy" displays the value of the "Cluster default" setting. If you are adding or editing a VM and are deciding whether to override the cluster-level serial number policy, seeing that information here is convenient. Previously, to see the cluster's default serial number policy, you had to close the VM window and navigate to the Cluster window.

BZ# 1718818 This enhancement enables you to attach a SCSI host device, scsi_hostdev, to a virtual machine and specify the optimal driver for the type of SCSI device:
scsi_generic: (Default) Enables the guest operating system to access OS-supported SCSI host devices attached to the host. Use this driver for SCSI media changers that require raw access, such as tape or CD changers.
scsi_block: Similar to scsi_generic but with better speed and reliability. Use it for SCSI disk devices. If trim or discard for the underlying device is desired, and it's a hard disk, use this driver.
scsi_hd: Provides performance with lowered overhead. Supports large numbers of devices. Uses the standard SCSI device naming scheme. Can be used with aio-native. Use this driver for high-performance SSDs.
virtio_blk_pci: Provides the highest performance without the SCSI overhead. Supports identifying devices by their serial numbers.

BZ# 1726494 qemu-guest-agent for OpenSUSE guests has been updated to the qemu-guest-agent-3.1.0-lp151.6.1 build.

BZ# 1726907 With this update, you can select Red Hat CoreOS (RHCOS) as the operating system for a virtual machine. When you do so, the initialization type is set to ignition. RHCOS uses ignition to initialize the virtual machine, differentiating it from RHEL.

BZ# 1731395 Previously, with every security update, a new CPU type was created in the vdc_options table under the key ServerCPUList in the database for all affected architectures. For example, the Intel Skylake Client Family included the following CPU types: Intel Skylake Client Family, Intel Skylake Client IBRS Family, Intel Skylake Client IBRS SSBD Family, and Intel Skylake Client IBRS SSBD MDS Family. With this update, only two CPU types are now supported for any CPU microarchitecture that has security updates, keeping the CPU list manageable. For example: Intel Skylake Client Family and Secure Intel Skylake Client Family. The default CPU type will not change. The Secure CPU type will contain the latest updates.

BZ# 1732738 The ovirt-engine software stack has been modernized to build and run with java-11-openjdk, the new LTS OpenJDK version from Red Hat.

BZ# 1733031 To transfer virtual machines between data centers, you use data storage domains because export domains were deprecated. However, moving a data storage domain to a data center that has a higher compatibility level (DC level) can upgrade its storage format version, for example, from V3 to V5. This higher format version can prevent you from reattaching the data storage domain to the original data center and transferring additional virtual machines.
In the current release, if you encounter this situation, the Administration Portal asks you to confirm that you want to update the storage domain format, for example, from 'V3' to 'V5'. It also warns that you will not be able to attach it back to an older data center with a lower DC level. To work around this issue, you can create a destination data center that has the same compatibility level as the source data center. When you finish transferring the virtual machines, you can increase the DC level.

BZ# 1733932 With this update, you can remove an unregistered entity, such as a virtual machine, a template, or a disk, without importing it into the environment.

BZ# 1734727 The current release updates the ovirt-engine-extension-logger-log4j package from OpenJDK version 8 to version 11 so it aligns with the oVirt engine.

BZ# 1739557 With this update, you can enable encryption for live migration of virtual machines between hosts in the same cluster. This provides more protection to data transferred between hosts. You can enable or disable encryption in the Administration Portal, in the Edit Cluster dialog box, under Migration Policy > Additional Properties. Encryption is disabled by default.

BZ# 1740644 The current release adds a configuration option, VdsmUseNmstate, which you can use to enable nmstate on every new host with cluster compatibility level >= 4.4.

BZ# 1740978 When a VM from an older compatibility version is imported, its configuration has to be updated to be compatible with the current cluster compatibility version. This enhancement adds a warning to the audit log that lists the updated parameters.

BZ# 1745019 The current release adds support for running virtual machines on hosts that have an Intel Snow Ridge CPU. There are two ways to enable this capability: Enable a virtual machine's Pass-Through Host CPU setting and configure it to Start Running On Specific Host(s) that have a Snow Ridge CPU, or set cpuflags in the virtual machine's custom properties to +gfni,+cldemote.

BZ# 1748097 In this release, it is now possible to edit the properties of a Floating Virtual Disk in the Storage > Disks tab of the Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk. You can also update floating virtual disk properties using the REST API update (PUT) command described in the Red Hat Virtualization REST API Guide.

BZ# 1749284 Before this update, the live snapshot operation was synchronous, such that if VDSM required more than 180 seconds to create a snapshot, the operation failed, preventing snapshots of some virtual machines, such as those with large memory loads or slow storage. With this update, the live snapshot operation is asynchronous, so the operation runs until it ends successfully, regardless of how long it takes.

BZ# 1751268 The current release adds a new Insights section to the RHV welcome or landing page. This section contains two links: "Insights Guide" links to the "Deploying Insights in Red Hat Virtualization Manager" topic in the Administration Guide. "Insights Dashboard" links to the Red Hat Insights Dashboard on the Customer Portal.

BZ# 1752995 With this update, the default action in the VM Portal's dashboard for a running virtual machine is to open a console. Before this update, the default action was "Suspend". Specifically, the default operation for a running VM is set to "SPICE Console" if the virtual machine supports SPICE, or "VNC Console" if the virtual machine only supports VNC.
For a virtual machine running in headless mode, the default action is still "Suspend". BZ# 1757320 This update provides packages required to run oVirt Node and oVirt CentOS Linux hosts based on CentOS Linux 8. BZ# 1758289 When you remove a host from the RHV Manager, it can create duplicate entries for a host-unreachable event in the RHV Manager database. Later, if you add the host back to the RHV Manager, these entries can cause networking issues. With this enhancement, if this type of event happens, the RHV Manager prints a message to the events tab and log. The message notifies users of the issue and explains how to avoid networking issues if they add the host back to RHV Manager. BZ# 1763812 The current release moves the button to Remove a virtual machine to the "more" menu (three dots in the upper-right area). This was done to improve usability: Too many users pressed the Remove button, mistakenly believing it would remove a selected item in the details view, such as a snapshot. They did not realize it would delete the virtual machine. The new location should help users avoid this kind of mistake. BZ# 1764788 In this release, Ansible Runner is installed by default and allows running Ansible playbooks directly in the Red Hat Virtualization Manager. BZ# 1767319 In this release, modifying a MAC address pool or modifying the range of a MAC address pool that has any overlap with existing MAC address pool ranges, is strictly forbidden. BZ# 1768844 With this enhancement, when you add a host to a cluster, it has the advanced virtualization channel enabled, so the host uses the latest supported libvirt and qemu packages. BZ# 1768937 With this enhancement, the Administration Portal enables you to copy a host network configuration from one host to another by clicking a button. Copying network configurations this way is faster and easier than configuring each host separately. BZ# 1771977 On RHV-4.4, NetworkManager manages the interface and static routes. As a result, you can make more robust modifications to static routes using Network Manager Stateful Configuration (nmstate). BZ# 1777877 This release adds Grafana as a user interface and visualization tool for monitoring the Data Warehouse. You can install and configure Grafana during engine-setup. Grafana includes pre-built dashboards that present data from the ovirt_engine_history PostgreSql data warehouse database. BZ# 1779580 The current release updates the Documentation section of the RHV welcome or landing page. This makes it is easier to access the current documentation and facilitate access to translated documentation in the future. The links now point to the online documentation on the Red Hat customer portal. The "Introduction to the Administration Portal" guide and "REST API v3 Guide" are now obsolete and have been removed. The rhvm-doc package is obsolete and has been removed. BZ# 1780943 Previously, a live snapshot of a virtual machine could take an infinite amount of time, locking the virtual machine. With this release, you can set a limit on the amount of time an asynchronous live snapshot can take using the command engine-config -s LiveSnapshotTimeoutInMinutes=<time> where <time> is a value in minutes. After the set time passes, the snapshot aborts, releasing the lock and enabling you to use the virtual machine. The default value of <time> is 30 . BZ# 1796809 The apache-sshd library is not bundled anymore in the rhvm-dependencies package. The apache-sshd library is now packaged in its own rpm package. 
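As a sketch of how you might apply the LiveSnapshotTimeoutInMinutes option described in BZ# 1780943 above, you could run engine-config on the Manager machine. The 60-minute value and the engine restart step are illustrative assumptions, not required settings:
# Set the asynchronous live snapshot timeout to 60 minutes (example value)
engine-config -s LiveSnapshotTimeoutInMinutes=60
# Verify the stored value
engine-config -g LiveSnapshotTimeoutInMinutes
# Restart the engine so the new value takes effect (assumed, as with most engine-config changes)
systemctl restart ovirt-engine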
BZ# 1798127 apache-commons-collections4 has been packaged for Red Hat Virtualization Manager consumption. The package is an extension of the Java Collections Framework. BZ# 1798403 Previously, the Windows guest tools were delivered as virtual floppy disk ( .vfd ) files. With this release, the virtual floppy disk is removed and the Windows guest tools are included as a virtual CD-ROM. To install the Windows guest tools, select the Attach Windows guest tools CD check box when installing a Windows virtual machine. BZ# 1806339 The current release changes the Huge Pages label to Free Huge Pages so it is easier to understand what the values represent. BZ# 1813831 This enhancement enables you to remove incremental backup root checkpoints. Backing up a virtual machine (VM) creates a checkpoint in libvirt and the RHV Manager's database. In large-scale environments, these backups can produce a high number of checkpoints. When you restart virtual machines, the Manager redefines their checkpoints on the host; if there are many checkpoints, this operation can degrade performance. The checkpoints' XML descriptions also consume a lot of storage. This enhancement provides the following operations: View all the VM checkpoints using the new checkpoints service under the VM service - GET path-to-engine/api/vms/vm-uuid/checkpoints View a specific checkpoint - GET path-to-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid Remove the oldest (root) checkpoint from the chain - DELETE path-to-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid BZ# 1821487 Previously, network tests timed out after 2 seconds. The current release increases the timeout period from 2 seconds to 5 seconds. This reduces unnecessary timeouts when the network tests require more than 2 seconds to pass. BZ# 1821930 With this enhancement, RHEL 7-based hosts have SPICE encryption enabled during host deployment. Only TLSv1.2 and newer protocols are enabled, and the available ciphers are limited as described in BZ1563271. RHEL 8-based hosts do not have SPICE encryption enabled. Instead, they rely on the defined RHEL crypto policies (similar to VDSM BZ1179273). BZ# 1824117 The usbutils and net-tools packages have been added to the RHV-H optional channel. This eases the installation of the "iDRAC Service Module" on Dell PowerEdge systems. BZ# 1831031 This enhancement increases the maximum memory limit for virtual machines to 6 TB. This also applies to virtual machines with cluster level 4.3 in RHV 4.4. BZ# 1841083 With this update, the maximum memory size for 64-bit virtual machines based on x86_64 or ppc64/ppc64le architectures is now 6 TB. This limit also applies to virtual machines based on x86_64 architecture in the 4.2 and 4.3 cluster levels. BZ# 1845017 Starting with this release, the Grafana dashboard for the Data Warehouse is installed by default to enable easy monitoring of Red Hat Virtualization metrics and logs. The Data Warehouse is installed by default at the Basic scale of resource use. To obtain the full benefits of Grafana, it is recommended that you update the Data Warehouse scale to Full (to be able to view a larger data collection interval of up to 5 years). Full scaling may require migrating the Data Warehouse to a separate virtual machine. For Data Warehouse scaling instructions, see Changing the Data Warehouse Sampling Scale. For instructions on migrating to or installing on a separate machine, see Migrating the Data Warehouse to a Separate Machine 
and Installing and Configuring Data Warehouse on a Separate Machine. BZ# 1848381 The current release adds a panel to the beginning of each Grafana dashboard describing the reports it displays and their purposes. 6.14.3. Rebase: Bug Fixes and Enhancements These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization: BZ# 1700867 The makeself package has been rebased to version 2.4.0. Highlights, important fixes, or notable enhancements: v2.3.0: Support for archive encryption via GPG or OpenSSL. Added LZO and LZ4 compression support. Options to set the packaging date and stop the umask from being overridden. Optionally ignore check for available disk space when extracting. New option to check for root permissions before extracting. v2.3.1: Various compatibility updates. Added unit tests for Travis CI in the GitHub repo. New --tar-extra, --untar-extra, --gpg-extra, --gpg-asymmetric-encrypt-sign options. v2.4.0: Added optional support for SHA256 archive integrity checksums. BZ# 1701530 Rebase package(s) to version: 0.1.2. With this update, the ovirt-cockpit-sso package supports RHEL 8. BZ# 1713700 Rebase package(s) to version: spice-qxl-wddm-dod 0.19. Highlights, important fixes, or notable enhancements: Add 800x800 resolution Improve performance vs spice server 14.0 and earlier Fix black screen on driver uninstall on OVMF platforms Fix black screen on return from S3 BZ# 1796815 The Object-Oriented SNMP API for Java Managers and Agents (snmp4j) library has been packaged for RHV-M consumption. The library was previously provided by the rhvm-dependencies package and is now provided as a standalone package. BZ# 1797316 Upgrade package(s) to version: rhv-4.4.0-23. Highlights and important bug fixes: Enhancements to VM snapshots caused a regression due to inconsistencies between the VDSM and RHV Manager versions. This upgrade fixes the issue by synchronizing the RHV Manager version to match the VDSM version. BZ# 1798114 Rebase of the apache-commons-digester package to version 2.1. This update is a minor release with new features. See the Apache release notes for more information. BZ# 1798117 Rebase of the apache-commons-configuration package to version 1.10. This update includes minor bug fixes and enhancements. See the Apache release notes for more information. BZ# 1799171 With this rebase, the ws-commons-utils package has been updated to version 1.0.2, which provides the following changes: Updated the non-static "newDecoder" method in the Base64 class to be static. Fixed the completely broken CharSetXMLWriter. BZ# 1807047 The m2crypto package has been built for use with the current version of RHV Manager. This package enables you to call OpenSSL functions from Python scripts. BZ# 1818745 With this release, Red Hat Virtualization is ported to Python 3. It no longer depends on Python 2. 6.14.4. Rebase: Enhancements Only These items are rebases of enhancements included in this release of Red Hat Virtualization: BZ# 1698009 The openstack-java-sdk package has been rebased to version 3.2.8. Highlights and notable enhancements: Refactored the package to use newer versions of these dependent libraries: Upgraded jackson to com.fasterxml version 2.9.x Upgraded commons-httpclient to org.apache.httpcomponents version 4.5 BZ# 1720686 With this rebase, the ovirt-scheduler-proxy packages have been updated to version 0.1.9, introducing support for RHEL 8 and refactoring the code for Python 3 and Java 11 support. 6.14.5. 
Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1745302 oVirt 4.4 replaces the ovirt-guest-tools with a new WiX-based installer, included in Virtio-Win. You can download the ISO file containing the Windows guest drivers, agents, and installers from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/ BZ# 1838159 With this release, you can add hosts to RHV Manager that do not provide standard rsa-sha-1 SSH public keys but only provide rsa-sha256/rsa-sha-512 SSH public keys instead, such as CentOS 8 hosts with FIPS hardening enabled. BZ# 1844389 On non-production systems, you can use CentOS Stream as an alternative to CentOS Linux. 6.14.6. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1809116 There is currently a known issue: Open vSwitch (OVS) does not work with nmstate-managed hosts. Therefore, OVS clusters cannot contain RHEL 8 hosts. Workaround: In clusters that use OVS, do not upgrade hosts to RHEL 8. BZ# 1810550 The current release contains a known issue: When the RHV Manager tries to change the mode of an existing bond to mode balance-tlb 5 or mode balance-alb 6, the host fails to apply this change. The Manager reports this as a user-visible error. To work around this issue, remove the bond and create a new one with the desired mode. A solution is presently being worked on and, if successful, is intended for RHEL 8.2.1. BZ# 1813694 Known issue: If you configure a virtual machine's BIOS Type and Emulation Machine Type with mismatched settings, the virtual machine fails when you restart it. Workaround: To avoid problems, configure the BIOS Type and Emulation Machine Type with the proper settings for your hardware. The current release helps you avoid this issue: Adding a Host to a new cluster with auto-detect sets the BIOS Type accordingly. BZ# 1829656 Known issue: Unsubscribed RHVH hosts do not get package updates when you perform a 'Check for upgrade' operation. Instead, you get a 'no updates found' message. This happens because RHVH hosts that are not registered to Red Hat Subscription Management (RHSM) do not have repos enabled. Workaround: To get updates, register the RHVH host with Red Hat Subscription Management (RHSM). BZ# 1836181 The current release contains a known issue: If a VM has a bond mode 1 (active-backup) over an SR-IOV vNIC and VirtIO vNIC, the bond might stop working after the VM migrates to a host with SR-IOV on a NIC that uses an i40e driver, such as the Intel X710. BZ# 1852422 Registration fails for user accounts that belong to multiple organizations Currently, when you attempt to register a system with a user account that belongs to multiple organizations, the registration process fails with the error message You must specify an organization for new units . To work around this problem, you can either: Use a different user account that does not belong to multiple organizations. Use the Activation Key authentication method available in the Connect to Red Hat feature for GUI and Kickstart installations. Skip the registration step in Connect to Red Hat and use the Subscription Manager to register your system post-installation. BZ# 1859284 If you create VLANs on virtual functions of SR-IOV NICs, and the VLAN interface names are longer than ten characters, the VLANs fail. 
This happens because the naming convention for VLAN interfaces, parent_device.VLAN_ID , tends to produce names that exceed the 10-character limit. The workaround for this issue is to create udev rules as described in 1854851 . BZ# 1860923 In RHEL 8.2, ignoredisk --drives is not recognized by Anaconda in Kickstart files correctly. Consequently, when installing or reinstalling the host's operating system, it is strongly recommended that you either detach any existing non-OS storage that is attached to the host, or use ignoredisk --only-use to avoid accidental initialization of these disks, and with that, potential data loss. BZ# 1863045 When you upgrade Red Hat Virtualization with a storage domain that is locally mounted on / (root), the data might be lost. Use a separate logical volume or disk to prevent possible loss of data during upgrades. If you are using / (root) as the locally mounted storage domain, migrate your data to a separate logical volume or disk prior to upgrading. 6.14.7. Removed Functionality BZ# 1399714 Version 3 of the Python SDK has been deprecated since version 4.0 of oVirt. The current release removes it completely, along with version 3 of the REST API. BZ# 1399717 Version 3 of the Java SDK has been deprecated since version 4.0 of oVirt. The current release removes it completely, along with version 3 of the REST API. BZ# 1638675 The current release removes OpenStack Neutron deployment, including the automatic deployment of the neutron agents through the Network Provider tab in the New Host window and the AgentConfiguration in the REST-API. Use the following components instead: To deploy OpenStack hosts, use the OpenStack Platform Director/TripleO. The Open vSwitch interface mappings are already managed automatically by VDSM in Clusters with switch type OVS. To manage the deployment of ovirt-provider-ovn-driver on a cluster, update the cluster's "Default Network Provider" attribute. BZ# 1658061 RHV 4.3 was shipping drivers for Windows XP and Windows Server 2k3. Both of these operating systems are obsolete and unsupported. The current release removes these drivers. BZ# 1698016 Previously, the cockpit-machines-ovirt package was deprecated in Red Hat Virtualization version 4.3 (reference bug #1698014). The current release removes the cockpit-machines-ovirt from the ovirt-host dependencies and RHV-H image. BZ# 1703840 The vdsm-hook-macspoof has been dropped from the VDSM code. If you still require the ifacemacspoof hook, you can find and fix the vnic profiles using a script similar to the one provided in the commit message . BZ# 1712255 Support for datacenter and cluster levels earlier than version 4.2 has been removed. BZ# 1725775 Previously, the screen package was deprecated in RHEL 7.6. With this update to RHEL 8-based hosts, the screen package is removed. The current release installs the tmux package on RHEL 8-based hosts instead of screen . BZ# 1728667 The current release removes heat-cfntools, which is not used in rhvm-appliance and RHV. Updates to heat-cfntools are available only through OSP. BZ# 1746354 With this release, the Application Provisioning Tool service (APT) is removed. The APT service could cause a Windows virtual machine to reboot without notice, causing possible data loss. With this release, the virtio-win installer replaces the APT service. BZ# 1753889 In RHV version 4.4, oVirt Engine REST API v3 has been removed. Update your custom scripts to use REST API v4. 
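If you still have scripts that call REST API v3 endpoints, point them at the v4 base path instead. The following is a minimal sketch that lists virtual machines through REST API v4; the Manager host name and the admin@internal credentials are assumptions, and the CA certificate path shown is the default location on the Manager machine:
# List virtual machines through REST API v4 (the only supported version in RHV 4.4)
curl --cacert /etc/pki/ovirt-engine/ca.pem \
  --user admin@internal:password \
  --header "Accept: application/xml" \
  https://engine.example.com/ovirt-engine/api/vms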
BZ# 1753894 The oVirt Engine SDK 3 Java bindings are no longer shipped with the oVirt 4.4 release. BZ# 1753896 The oVirt Python SDK version 3 has been removed from the project. You need to upgrade your scripts to use Python SDK version 4. BZ# 1795684 Hystrix monitoring integration has been removed from ovirt-engine due to limited adoption and the difficulty of maintaining it. BZ# 1796817 The Object-Oriented SNMP API for Java Managers and Agents (snmp4j) library is no longer bundled with the rhvm-dependencies package. It is now provided as a standalone rpm package (Bug #1796815). BZ# 1818554 The current version of RHV removes libvirt packages that provided non-socket activation. Now it contains only libvirt versions that provide socket activation. Socket activation provides better resource handling: there is no dedicated active daemon; libvirt is activated for certain tasks and then exits. BZ# 1827177 Metrics Store support has been removed in Red Hat Virtualization 4.4. Administrators can use the Data Warehouse with Grafana dashboards (deployed by default with Red Hat Virtualization 4.4) to view metrics and inventory reports. See the Grafana documentation for information on Grafana. Administrators can also send metrics and logs to a standalone Elasticsearch instance. BZ# 1846596 In previous versions, the katello-agent package was automatically installed on all hosts as a dependency of the ovirt-host package. The current release, RHV 4.4, removes this dependency to reflect the removal of katello-agent from Satellite 6.7. Instead, you can now use katello-host-tools, which enables users to install the correct agent for their version of Satellite.
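As a rough sketch of that last point, on a host that is registered to Satellite you might install the client tooling directly; the exact package set and repository depend on your Satellite version, so treat this as an assumption to verify against the Satellite documentation:
# Install the Satellite client tooling that replaces katello-agent (example package name from the note above)
dnf install katello-host-tools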
[ "dnf config-manager --set-enabled rhel-8-openstack-cinderlib-rpms", "sudo dnf install python3-cinderlib", "engine-backup --scope=all --mode=backup --file=cinderlib_from_old_engine --log=log_cinderlib_from_old_engine", "engine-backup --mode=restore --file=/root/cinderlib_from_old_engine --log=/root/log_cinderlib_from_old_engine --provision-all-databases --restore-permissions", "update-crypto-policies --set LEGACY", "/usr/share/ovirt-engine-metrics/configure_ovirt_machines_for_metrics.sh --playbook=manage-ovirt-metrics-services.yml", "hosted-engine --vm-start-paused", "hosted-engine --vm-start" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_general_availability_ovirt_4_4_1
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.24/proc-providing-feedback-on-redhat-documentation
1.2. About this guide
1.2. About this guide This guide describes how to import virtual machines from foreign hypervisors to Red Hat Enterprise Virtualization and KVM managed by libvirt. 1.2.1. Audience This guide is intended for system administrators who manage a virtualized environment using Red Hat Enterprise Virtualization or Red Hat Enterprise Linux. An advanced level of system administration, preferably including familiarity with virtual machine data center operations, is assumed. This document is not intended for beginners.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-introducing_v2v-about_this_guide
Chapter 14. Database cleaning
Chapter 14. Database cleaning The Compute service includes an administrative tool, nova-manage , that you can use to perform deployment, upgrade, clean-up, and maintenance-related tasks, such as applying database schemas, performing online data migrations during an upgrade, and managing and cleaning up the database. Director automates the following database management tasks on the overcloud by using cron: Archives deleted instance records by moving the deleted rows from the production tables to shadow tables. Purges deleted rows from the shadow tables after archiving is complete. 14.1. Configuring database management The cron jobs use default settings to perform database management tasks. By default, the database archiving cron jobs run daily at 00:01, and the database purging cron jobs run daily at 05:00, both with a jitter between 0 and 3600 seconds. You can modify these settings as required by using heat parameters. Procedure Open your Compute environment file. Add the heat parameter that controls the cron job that you want to add or modify. For example, to purge the shadow tables immediately after they are archived, set the following parameter to "True": For a complete list of the heat parameters to manage database cron jobs, see Configuration options for the Compute service automated database management . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 14.2. Configuration options for the Compute service automated database management Use the following heat parameters to enable and modify the automated cron jobs that manage the database. Table 14.1. Compute (nova) service cron parameters Parameter Description NovaCronArchiveDeleteAllCells Set this parameter to "True" to archive deleted instance records from all cells. Default: True NovaCronArchiveDeleteRowsAge Use this parameter to archive deleted instance records based on their age in days. Set to 0 to archive data older than today in shadow tables. Default: 90 NovaCronArchiveDeleteRowsDestination Use this parameter to configure the file for logging deleted instance records. Default: /var/log/nova/nova-rowsflush.log NovaCronArchiveDeleteRowsHour Use this parameter to configure the hour at which to run the cron command to move deleted instance records to another table. Default: 0 NovaCronArchiveDeleteRowsMaxDelay Use this parameter to configure the maximum delay, in seconds, before moving deleted instance records to another table. Default: 3600 NovaCronArchiveDeleteRowsMaxRows Use this parameter to configure the maximum number of deleted instance records that can be moved to another table. Default: 1000 NovaCronArchiveDeleteRowsMinute Use this parameter to configure the minute past the hour at which to run the cron command to move deleted instance records to another table. Default: 1 NovaCronArchiveDeleteRowsMonthday Use this parameter to configure on which day of the month to run the cron command to move deleted instance records to another table. Default: * (every day) NovaCronArchiveDeleteRowsMonth Use this parameter to configure in which month to run the cron command to move deleted instance records to another table. Default: * (every month) NovaCronArchiveDeleteRowsPurge Set this parameter to "True" to purge shadow tables immediately after scheduled archiving. 
Default: False NovaCronArchiveDeleteRowsUntilComplete Set this parameter to "True" to continue to move deleted instance records to another table until all records are moved. Default: True NovaCronArchiveDeleteRowsUser Use this parameter to configure the user that owns the crontab that archives deleted instance records and that has access to the log file the crontab uses. Default: nova NovaCronArchiveDeleteRowsWeekday Use this parameter to configure on which day of the week to run the cron command to move deleted instance records to another table. Default: * (every day) NovaCronPurgeShadowTablesAge Use this parameter to purge shadow tables based on their age in days. Set to 0 to purge shadow tables older than today. Default: 14 NovaCronPurgeShadowTablesAllCells Set this parameter to "True" to purge shadow tables from all cells. Default: True NovaCronPurgeShadowTablesDestination Use this parameter to configure the file for logging purged shadow tables. Default: /var/log/nova/nova-rowspurge.log NovaCronPurgeShadowTablesHour Use this parameter to configure the hour at which to run the cron command to purge shadow tables. Default: 5 NovaCronPurgeShadowTablesMaxDelay Use this parameter to configure the maximum delay, in seconds, before purging shadow tables. Default: 3600 NovaCronPurgeShadowTablesMinute Use this parameter to configure the minute past the hour at which to run the cron command to purge shadow tables. Default: 0 NovaCronPurgeShadowTablesMonth Use this parameter to configure in which month to run the cron command to purge the shadow tables. Default: * (every month) NovaCronPurgeShadowTablesMonthday Use this parameter to configure on which day of the month to run the cron command to purge the shadow tables. Default: * (every day) NovaCronPurgeShadowTablesUser Use this parameter to configure the user that owns the crontab that purges the shadow tables and that has access to the log file the crontab uses. Default: nova NovaCronPurgeShadowTablesVerbose Use this parameter to enable verbose logging in the log file for purged shadow tables. Default: False NovaCronPurgeShadowTablesWeekday Use this parameter to configure on which day of the week to run the cron command to purge the shadow tables. Default: * (every day)
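The following is a minimal sketch of a Compute environment file that combines several of these parameters; the values shown (a 30-day archive age, a 02:01 archive run, immediate purging, and a 7-day shadow table retention) are illustrative assumptions rather than recommended defaults:
parameter_defaults:
  # Archive deleted instance records older than 30 days, daily at 02:01
  NovaCronArchiveDeleteRowsAge: 30
  NovaCronArchiveDeleteRowsHour: 2
  NovaCronArchiveDeleteRowsMinute: 1
  # Purge the shadow tables immediately after each archiving run
  NovaCronArchiveDeleteRowsPurge: True
  # Independently purge shadow table rows older than 7 days
  NovaCronPurgeShadowTablesAge: 7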
[ "parameter_defaults: NovaCronArchiveDeleteRowsPurge: True", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_database-cleaning_database-cleaning
Chapter 9. Service Registry artifact reference
Chapter 9. Service Registry artifact reference This chapter provides reference information on the supported artifact types, states, and metadata that are stored in Service Registry. Section 9.1, "Service Registry artifact types" Section 9.2, "Service Registry artifact states" Section 9.3, "Service Registry artifact metadata" Additional resources For more information, see the Apicurio Registry REST API documentation . 9.1. Service Registry artifact types You can store and manage a wide range of schema and API artifact types in Service Registry. Table 9.1. Service Registry artifact types Type Description ASYNCAPI AsyncAPI specification AVRO Apache Avro schema GRAPHQL GraphQL schema JSON JSON Schema KCONNECT Apache Kafka Connect schema OPENAPI OpenAPI specification PROTOBUF Google protocol buffers schema WSDL Web Services Definition Language XML Extensible Markup Language XSD XML Schema Definition 9.2. Service Registry artifact states The valid artifact states in Service Registry are ENABLED , DISABLED , and DEPRECATED . Table 9.2. Service Registry artifact states State Description ENABLED Basic state, all the operations are available. DISABLED The artifact and its metadata is viewable and searchable using the Service Registry web console, but its content cannot be fetched by any client. DEPRECATED The artifact is fully usable but a header is added to the REST API response whenever the artifact content is fetched. The Service Registry Rest Client will also log a warning whenever it sees deprecated content. 9.3. Service Registry artifact metadata When an artifact is added to Service Registry, a set of metadata properties is created and stored along with the artifact content. This metadata consists of system-generated or user-generated properties that are read-only, and editable properties that you can update after the artifact is created. Table 9.3. Service Registry system-generated metadata Property Type Description contentId integer Unique identifier of artifact content in Service Registry. The same content ID can be shared by multiple artifact versions when artifact versions have identical content. For example, a content ID of 4 can be used by multiple artifact versions with the same content. createdBy string The name of the user who created the artifact. createdOn date The date and time when the artifact was created, for example, 2023-10-11T14:15:28Z . globalId integer Globally unique identifier of an artifact version in Service Registry. For example, a global ID of 1 is assigned to the first artifact version created in Service Registry. modifiedBy string The name of the user who modified the artifact. modifiedOn date The date and time at which the artifact was modified, for example, 2023-10-11T14:15:28Z . type ArtifactType The supported artifact type, for example, AVRO , OPENAPI , or PROTOBUF . Table 9.4. Service Registry user-provided or system-generated metadata Property Type Description groupId string Unique identifier of an artifact group in Service Registry, for example, development or production . When creating an artifact by using the Service Registry web console, if you do not provide a group ID, this is set to default . You must provide a group ID when using the Apicurio Registry REST API, Java client, or Maven plug-in. id string Unique identifier of an artifact in Service Registry. You can provide an artifact ID or use the UUID generated by Service Registry, for example, 8d168cad-1865-4e6c-bb7e-04e8be005bea . 
Different versions of an artifact use the same artifact ID, but have different global IDs. references array of ArtifactReference Optional set of artifact references contained in the artifact, which you can provide when creating the artifact. The following simple example shows a single artifact reference: [{"groupId":"my-group","artifactId":"ItemId","version":"1","name":"com.example.common.ItemId"}] . version integer The latest version of the artifact. You can use the generated version, for example, 3 , or provide a version by using the Service Registry REST API or Maven plug-in, for example, 2.1.6 . Table 9.5. Service Registry editable metadata Property Type Description description string Optional meaningful description of the artifact, for example, This is a simple OpenAPI for testing . You can provide a description, or it can be automatically discovered from the info section of OpenAPI and AsyncAPI artifacts, if already provided. labels array of string Optional comma-separated list of labels used to filter and search for the artifact, for example, test,protobuf . Provided by the user. name string Optional human-readable name of the artifact, for example, My first Avro schema . You can provide a description, or it can be automatically discovered from the info section of OpenAPI and AsyncAPI artifacts, if the title field has a value. properties map Optional list of user-defined name-value pairs associated with the artifact. The name and value must be strings, for example, my-key and my-value . state ArtifactState The latest state of the artifact: ENABLED , DISABLED , or DEPRECATED . Defaults to ENABLED . Updating artifact metadata You can use the Service Registry REST API or web console to update the set of editable metadata properties. You can update the state property only by using the Service Registry REST API. Additional resources For more details, see the /artifacts/{artifactId}/meta endpoint in the Apicurio Registry REST API documentation .
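As an illustration of updating the editable metadata over the REST API, the following sketch sends new values for the name, description, and labels properties. The host name, group ID, and artifact ID are assumptions, and the base path can differ between deployments, so confirm the exact endpoint in the Apicurio Registry REST API documentation:
# Update editable metadata for an artifact (example host, group ID, and artifact ID)
curl -X PUT \
  -H "Content-Type: application/json" \
  -d '{"name": "My first Avro schema", "description": "Example schema for testing", "labels": ["test", "avro"]}' \
  "http://localhost:8080/apis/registry/v2/groups/default/artifacts/my-artifact/meta"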
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/registry-artifact-reference_registry
Installing
Installing
OpenShift Container Platform 4.9
Installing and configuring OpenShift Container Platform clusters
Red Hat OpenShift Documentation Team
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "sudo ./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login --authfile pull-secret.txt -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login --authfile pull-secret.txt -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade", "sudo ./mirror-registry uninstall -v --quayRoot <example_directory_name>", "x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release 
mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "podman login registry.redhat.io", "REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json", "podman login <mirror_registry>", "oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6", "src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2", "oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2", "podman login <mirror_registry>", "oc adm catalog mirror file://local/index/<repo>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>/<namespace> --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]", "manifests-<index_image_name>-<random_number>", "manifests-index/<namespace>/<index_image_name>-<random_number>", "platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2", "compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole", "controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: 
openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", ":_content-type: CONCEPT", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 compute: 7 - 
hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 11 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-96c6f8f7 12 serviceEndpoints: 13 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 14 sshKey: ssh-ed25519 AAAA... 15 pullSecret: '{\"auths\": ...}' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 compute: 7 - hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 12 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-96c6f8f7 13 serviceEndpoints: 14 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 15 sshKey: ssh-ed25519 AAAA... 
16 pullSecret: '{\"auths\": ...}' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "subnets: - subnet-1 - subnet-2 - subnet-3", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 compute: 7 - hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 11 userTags: adminContact: jdoe costCenter: 7536 subnets: 12 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 13 serviceEndpoints: 14 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 
17 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 compute: 7 - hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 11 userTags: adminContact: jdoe costCenter: 7536 subnets: 12 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 13 serviceEndpoints: 14 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 
17 pullSecret: '{\"auths\": ...}' 18", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 compute: 7 - hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 11 userTags: adminContact: jdoe costCenter: 7536 subnets: 12 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 13 serviceEndpoints: 14 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 17 publish: Internal 18 pullSecret: '{\"auths\": ...}' 19", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-gov-west-1 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 
compute: 7 - hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 11 userTags: adminContact: jdoe costCenter: 7536 subnets: 12 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 13 14 serviceEndpoints: 15 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 publish: Internal 19 pullSecret: '{\"auths\": ...}' 20 additionalTrustBundle: | 21 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 
--ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 type: m5.xlarge replicas: 3 compute: 7 - hyperthreading: Enabled 8 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 9 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 11 userTags: adminContact: jdoe costCenter: 7536 subnets: 12 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 13 14 serviceEndpoints: 15 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 publish: Internal 19 pullSecret: '{\"auths\": ...}' 20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. 
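The aws cloudformation create-stack call above returns the stack ARN immediately, before the VPC resources actually exist. A minimal follow-up sketch, assuming the stack name cluster-vpc from the example ARN and the VpcId and PrivateSubnetIds output keys defined in the template that follows:

# Wait for the VPC stack, then capture its outputs for later parameter files.
STACK_NAME=cluster-vpc
aws cloudformation wait stack-create-complete --stack-name "${STACK_NAME}"
VPC_ID=$(aws cloudformation describe-stacks --stack-name "${STACK_NAME}" \
  --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text)
PRIVATE_SUBNETS=$(aws cloudformation describe-stacks --stack-name "${STACK_NAME}" \
  --query "Stacks[0].Outputs[?OutputKey=='PrivateSubnetIds'].OutputValue" --output text)
echo "VpcId=${VPC_ID} PrivateSubnetIds=${PRIVATE_SUBNETS}"

The same wait-and-query pattern applies to every stack created later in this procedure.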
MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: 
\"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ]", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 
10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.7\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.7\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
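Each stack in this procedure takes a hand-edited parameters JSON file. As an alternative sketch, the security stack parameters can be generated from values captured earlier; INFRA_NAME, PRIVATE_SUBNETS, and VPC_ID are assumed to hold the infrastructure ID from metadata.json and the VPC stack outputs:

# Generate the parameters file for the security group and IAM role stack with jq.
jq -n --arg infra "${INFRA_NAME}" --arg cidr "10.0.0.0/16" \
      --arg subnets "${PRIVATE_SUBNETS}" --arg vpc "${VPC_ID}" \
  '[{ParameterKey:"InfrastructureName", ParameterValue:$infra},
    {ParameterKey:"VpcCidr",        ParameterValue:$cidr},
    {ParameterKey:"PrivateSubnets", ParameterValue:$subnets},
    {ParameterKey:"VpcId",          ParameterValue:$vpc}]' > security-parameters.json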
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": 
\"arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: \"i3.large\" NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
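Once the bootstrap stack reports CREATE_COMPLETE, bootstrap progress can be followed from the installation host, or directly on the bootstrap node through the BootstrapPublicIp output of this stack. A sketch, assuming BOOTSTRAP_IP holds that output value:

# Watch bootstrapping from the installation host.
./openshift-install wait-for bootstrap-complete --dir <installation_directory> --log-level=info

# Optionally inspect the bootstrap services on the node itself (the RHCOS login user is "core").
ssh "core@${BOOTSTRAP_IP}" journalctl -b -f -u release-image.service -u bootkube.service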
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"m5.xlarge\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information? Type: String PrivateHostedZoneId: Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4. 
Type: String PrivateHostedZoneName: Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period. Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"DNS\" Parameters: - AutoRegisterDNS - PrivateHostedZoneName - PrivateHostedZoneId - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterDNS: default: \"Use Provided DNS Automation\" AutoRegisterELB: default: \"Use Provided ELB Automation\" PrivateHostedZoneName: default: \"Private Hosted Zone Name\" PrivateHostedZoneId: default: \"Private Hosted Zone ID\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] DoDns: !Equals [\"yes\", !Ref AutoRegisterDNS] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref 
\"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp EtcdSrvRecords: Condition: DoDns Type: AWS::Route53::RecordSet Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"_etcd-server-ssl._tcp\", !Ref PrivateHostedZoneName]] ResourceRecords: - !Join [ \" \", [\"0 10 2380\", !Join [\".\", [\"etcd-0\", !Ref PrivateHostedZoneName]]], ] - !Join [ \" \", [\"0 10 2380\", !Join [\".\", [\"etcd-1\", !Ref PrivateHostedZoneName]]], ] - !Join [ \" \", [\"0 10 2380\", !Join [\".\", [\"etcd-2\", !Ref PrivateHostedZoneName]]], ] TTL: 60 Type: SRV Etcd0Record: Condition: DoDns Type: AWS::Route53::RecordSet Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"etcd-0\", !Ref PrivateHostedZoneName]] ResourceRecords: - !GetAtt Master0.PrivateIp TTL: 60 Type: A Etcd1Record: Condition: DoDns Type: AWS::Route53::RecordSet Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"etcd-1\", !Ref PrivateHostedZoneName]] ResourceRecords: - !GetAtt Master1.PrivateIp TTL: 60 Type: A Etcd2Record: Condition: DoDns Type: AWS::Route53::RecordSet 
Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"etcd-2\", !Ref PrivateHostedZoneName]] ResourceRecords: - !GetAtt Master2.PrivateIp TTL: 60 Type: A Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"m4.2xlarge\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", 
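"# Added sketch, not from the original docs: poll until no cluster Operator reports AVAILABLE=False (column 3 of the default output), as a scriptable alternative to watching the list by hand", "until ! oc get clusteroperators --no-headers | awk '{ print $3 }' | grep -q False; do sleep 30; done",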
"watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> grafana-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Fe5en-ymBEc-Wt6NL\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
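# Added annotation, not in the upstream template: every resource guarded by DoAz2 or DoAz3 (the extra subnets, route tables, NAT gateways, and EIPs) is created only when AvailabilityZoneCount is 2 or 3.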
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
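# Added annotation, not in the upstream template: the !If expressions in the output values below substitute AWS::NoValue for the second and third subnets when their availability zones were not created, so the joined list contains only real subnet IDs.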
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ]", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.7\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.7\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
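The create-stack call returns immediately, so it can be convenient to block until the stack reaches CREATE_COMPLETE and then print its outputs in one place. This is an optional sketch using standard AWS CLI subcommands, with <name> standing for the same stack name used above:
"aws cloudformation wait stack-create-complete --stack-name <name>"
"aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].Outputs' --output table"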
Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref 
AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: \"i3.large\" NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"m5.xlarge\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": 
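If bootstrapping appears to hang, the BootstrapPublicIp output above can be used to follow the bootstrap logs directly. This sketch assumes SSH access is permitted by AllowedBootstrapSshCidr and uses the default RHCOS core user and the bootkube.service unit; <bootstrap_public_ip> is a placeholder:
"ssh core@<bootstrap_public_ip> journalctl -b -f -u bootkube.service"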
\"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information? Type: String PrivateHostedZoneId: Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4. Type: String PrivateHostedZoneName: Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period. Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"DNS\" Parameters: - AutoRegisterDNS - PrivateHostedZoneName - PrivateHostedZoneId - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterDNS: default: \"Use Provided DNS Automation\" AutoRegisterELB: default: \"Use Provided ELB Automation\" PrivateHostedZoneName: default: \"Private Hosted Zone Name\" PrivateHostedZoneId: default: \"Private Hosted Zone ID\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] DoDns: !Equals [\"yes\", !Ref AutoRegisterDNS] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref 
\"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp EtcdSrvRecords: Condition: DoDns Type: AWS::Route53::RecordSet Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"_etcd-server-ssl._tcp\", !Ref PrivateHostedZoneName]] ResourceRecords: - !Join [ \" \", [\"0 10 2380\", !Join [\".\", [\"etcd-0\", !Ref PrivateHostedZoneName]]], ] - !Join [ \" \", [\"0 10 2380\", !Join [\".\", [\"etcd-1\", !Ref PrivateHostedZoneName]]], ] - !Join [ \" \", [\"0 10 2380\", !Join [\".\", [\"etcd-2\", !Ref PrivateHostedZoneName]]], ] TTL: 60 Type: SRV Etcd0Record: Condition: DoDns Type: AWS::Route53::RecordSet Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"etcd-0\", !Ref PrivateHostedZoneName]] ResourceRecords: - !GetAtt Master0.PrivateIp TTL: 60 Type: A Etcd1Record: Condition: DoDns Type: AWS::Route53::RecordSet Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"etcd-1\", !Ref PrivateHostedZoneName]] ResourceRecords: - !GetAtt Master1.PrivateIp TTL: 60 Type: A Etcd2Record: Condition: DoDns Type: AWS::Route53::RecordSet 
Properties: HostedZoneId: !Ref PrivateHostedZoneId Name: !Join [\".\", [\"etcd-2\", !Ref PrivateHostedZoneName]] ResourceRecords: - !GetAtt Master2.PrivateIp TTL: 60 Type: A Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"m4.2xlarge\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. 
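Each compute node gets its own stack, so the same template and parameters file can be reused in a small loop if all workers share one subnet and instance profile. The loop below is only an illustration and the stack names are arbitrary:
"for INDEX in 0 1 2; do aws cloudformation create-stack --stack-name <name>-worker-${INDEX} --template-body file://<template>.yaml --parameters file://<parameters>.json; done"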
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True 
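Overall installation progress can also be followed from the clusterversion resource once the kubeconfig has been exported; this is a read-only check that complements watching the cluster Operators:
"oc get clusterversion"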
False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> grafana-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Fe5en-ymBEc-Wt6NL\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl aws delete --name= <name> --region= <aws_region>", "2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name> -oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name> -oidc 2021/04/08 17:50:43 Identity Provider bucket <name> -oidc deleted 2021/04/08 17:51:05 Policy <name> -openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name> -openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name> -openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name> -openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name> -openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name> -openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name> -openshift-image-registry-installer-cloud-credentials associated with IAM Role <name> -openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name> -openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name> -openshift-ingress-operator-cloud-credentials associated with IAM Role <name> -openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name> -openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name> -openshift-machine-api-aws-cloud-credentials associated with IAM Role <name> -openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name> -openshift-machine-api-aws-cloud-credentials 
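Because the CloudFormation stacks in this user-provisioned flow are created outside the installer, it can be worth listing which stacks still exist after a destroy and removing any leftovers with delete-stack as shown earlier. The query below only reads stack metadata:
"aws cloudformation describe-stacks --query 'Stacks[].StackName' --output text"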
deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam:: <aws_account_id> :oidc-provider/ <name> -oidc.s3. <aws_region> .amazonaws.com deleted", "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: 
<base64_encoded_azure_region>", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 
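The installer consumes install-config.yaml when it generates manifests or the cluster, so keeping a copy before continuing makes it easy to reuse the same settings for another installation. The backup file name here is arbitrary:
"cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup"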
name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_dir>", "image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker version: 4.8.2021122100 type: MarketplaceWithPlan", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 
17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 
10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 publish: Internal 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: usgovvirginia resourceGroupName: existing_resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzureUSGovernmentCloud 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
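Because this cluster publishes only internal records, the API and application hostnames resolve only where the private DNS zone is visible. A quick check from such a host might look like the following, with the hostname built from the values in install-config.yaml:
"dig +noall +answer api.<cluster_name>.<base_domain>"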
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. 
For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5", "export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "export INFRA_ID=<infra_id> 1", "export RESOURCE_GROUP=<resource_group> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}", "az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity", "export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`", "export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`", "az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"", "az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS", "export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`", "export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.9/data/data/rhcos.json | jq -r .azure.url`", "az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"", "az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --public-access blob", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"", "az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { 
\"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }", "export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vhdBlobURL\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Compute/images\", \"name\": \"[variables('imageName')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"storageProfile\": { \"osDisk\": { \"osType\": \"Linux\", \"osState\": \"Generalized\", \"blobUri\": \"[parameters('vhdBlobURL')]\", \"storageAccountType\": \"Standard_LRS\" } } } } ] }", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), 
'-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"public-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, 
\"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }", "export BOOTSTRAP_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -n \"bootstrap.ign\" -o tsv`", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" 3", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : 
\"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string.\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', 
variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }", "export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 3 --parameters baseName=\"USD{INFRA_ID}\" 4", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", 
\"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone the master nodes are going to be attached to\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" 
: [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/SRV\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"ttl\": 60, \"copy\": [{ \"name\": \"srvRecords\", \"count\": \"[length(variables('vmNames'))]\", \"input\": { \"priority\": 0, \"weight\" : 10, \"port\" : 2380, \"target\" : \"[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]\" } }] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"copy\" : { \"name\" : \"dnsCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\": \"[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]\", \"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }", 
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" 3", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), 
'-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 
64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20", "export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300", "az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": 
AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account 
show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4", "apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master replicas: 3 compute: 2 - name: worker platform: {} replicas: 0 metadata: name: test-cluster 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 4 baseDomainResourceGroupName: resource_group 5 region: azure_stack_local_region 6 resourceGroupName: existing_resource_group 7 outboundType: Loadbalancer cloudName: AzureStackCloud 8 pullSecret: '{\"auths\": ...}' 9 fips: false 10 additionalTrustBundle: | 11 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 
12", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5", "export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "spec: trustedCA: name: user-ca-bundle", "export INFRA_ID=<infra_id> 1", "export RESOURCE_GROUP=<resource_group> 1", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: \"true\" data: disabled: \"true\"", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}", "az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS", "export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`", "export COMPRESSED_VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.9/data/data/rhcos-amd64.json | jq -r '(.baseURI + .images.azurestack.path)'`", "az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "curl -O -L USD{COMPRESSED_VHD_URL}", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd", "az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --public-access blob", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"", "az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "link:https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/azurestack/01_vnet.json[]", "export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "link:https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/azurestack/02_storage.json[]", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`", "export PRIVATE_IP=`az network lb frontend-ip show -g \"USDRESOURCE_GROUP\" --lb-name \"USD{INFRA_ID}-internal\" -n internal-lb-ip --query 
\"privateIpAddress\" -o tsv`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z \"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" -n api-int -a USD{PRIVATE_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60", "link:https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/azurestack/03_infra.json[]", "export BOOTSTRAP_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -n \"bootstrap.ign\" -o tsv`", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "export CA=\"data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\\n')\"", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url \"USDBOOTSTRAP_URL\" --arg cert \"USDCA\" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "az deployment group create --verbose -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" \\ 3 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 4", "link:https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/azurestack/04_bootstrap.json[]", "export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" \\ 3 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 4", "link:https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/azurestack/05_masters.json[]", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters 
baseName=\"USD{INFRA_ID}\" 3 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 4", "link:https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/azurestack/06_workers.json[]", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20", "export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=gcp", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: 
cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", ":_content-type: CONCEPT", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: 
project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' 13 fips: false 14 sshKey: ssh-ed25519 AAAA... 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_dir>", "deletionProtection: false disks: - autoDelete: true boot: true image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 labels: null sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n2-standard-4", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 
172.30.0.0/16 platform: gcp: projectID: openshift-production 12 region: us-central1 13 pullSecret: '{\"auths\": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: 
Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 
18", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 
computeSubnet: compute_subnet 15 pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 publish: Internal 19", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift", "ls $HOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir $HOME/clusterconfig ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ".
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep 
\"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-instance-group --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-instance-group --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.1.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-instance-group', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-instance-group --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-instance-group --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-instance-group --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0 gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1 gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': 
context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-instance-group --instance-group-zone=USD{ZONE_0} gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': 
context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "export 
MASTER_SUBNET_CIDR='10.0.0.0/17'", "export WORKER_SUBNET_CIDR='10.0.128.0/17'", "export REGION='<region>'", "export HOST_PROJECT=<host_project>", "export HOST_PROJECT_ACCOUNT=<host_service_account_email>", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1", "export HOST_PROJECT_NETWORK=<vpc_network>", "export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>", "export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: 5 - hyperthreading: Enabled 6 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 7 region: us-central1 8 pullSecret: '{\"auths\": ...}' fips: false 9 sshKey: ssh-ed25519 AAAA... 
10 publish: Internal 11", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}", "config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export 
CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': 
['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create 
service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-instance-group --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-instance-group --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.1.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-instance-group', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-instance-group --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-instance-group --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-instance-group --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0 gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1 gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': 
context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-instance-group --instance-group-zone=USD{ZONE_0} gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': 
context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"", "Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`", "gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=USD{NETWORK_CIDR} --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 
4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 
httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: '${INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-m@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-w@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"${INFRA_ID}-rhcos-image\" --source-uri=\"${IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://${INFRA_ID}-bootstrap-ignition gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep 
\"^gs:\" | awk '{print $5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: '${INFRA_ID}' 1 region: '${REGION}' 2 zone: '${ZONE_0}' 3 cluster_network: '${CLUSTER_NETWORK}' 4 control_subnet: '${CONTROL_SUBNET}' 5 image: '${CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: '${BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.1.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': '$(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-instance-group', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: '${INFRA_ID}' 1 zones: 2 - '${ZONE_0}' - '${ZONE_1}' - '${ZONE_2}' control_subnet: '${CONTROL_SUBNET}' 3 image: '${CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: '${MASTER_SERVICE_ACCOUNT}' 6 ignition: '${MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-master-0 gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-master-1 gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-master-2", "gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=\"${ZONE_0}\" --instances=${INFRA_ID}-master-0 gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=\"${ZONE_1}\" --instances=${INFRA_ID}-master-1 gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=\"${ZONE_2}\" --instances=${INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': 
context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0} gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign gsutil rb gs://${INFRA_ID}-bootstrap-ignition gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-w@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: '${INFRA_ID}' 2 zone: '${ZONE_0}' 3 compute_subnet: '${COMPUTE_SUBNET}' 4 image: '${CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 7 ignition: '${WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: '${INFRA_ID}' 9 zone: '${ZONE_1}' 10 compute_subnet: '${COMPUTE_SUBNET}' 11 image: '${CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 14 ignition: '${WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': 
context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "./openshift-install 
destroy cluster --dir <installation_directory> --log-level info 1 2", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main 
coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "cp rhcos-<version>-live.x86_64.iso bootstrap.ign /mnt/ chmod 644 /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition embed -i /mnt/bootstrap.ign /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition show /mnt/rhcos-<version>-live.x86_64.iso > mybootstrap.ign", "diff -s bootstrap.ign mybootstrap.ign", "Files bootstrap.ign and mybootstrap.ign are identical", "./coreos-installer iso ignition remove /mnt/rhcos-<version>-live.x86_64.iso", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --append-karg 
rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False 
False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. 
IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main 
coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "cp rhcos-<version>-live.x86_64.iso bootstrap.ign /mnt/ chmod 644 /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition embed -i /mnt/bootstrap.ign /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition show /mnt/rhcos-<version>-live.x86_64.iso > mybootstrap.ign", "diff -s bootstrap.ign mybootstrap.ign", "Files bootstrap.ign and mybootstrap.ign are identical", "./coreos-installer iso ignition remove /mnt/rhcos-<version>-live.x86_64.iso", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --append-karg 
rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False 
False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL 
http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "cp rhcos-<version>-live.x86_64.iso bootstrap.ign /mnt/ chmod 644 /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition embed -i /mnt/bootstrap.ign /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition show /mnt/rhcos-<version>-live.x86_64.iso > mybootstrap.ign", "diff -s bootstrap.ign mybootstrap.ign", "Files bootstrap.ign and mybootstrap.ign are identical", "./coreos-installer iso ignition remove /mnt/rhcos-<version>-live.x86_64.iso", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", 
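"# Hedged sketch (not from the original document): the bonded VLAN directives above could also be persisted at install time by passing each one with --append-karg, the same option the multipath example in this guide uses; the Ignition URL, target device, and interface names (em1, em2) are illustrative placeholders sudo coreos-installer install --ignition-url=http://<HTTP_server>/worker.ign --append-karg ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none --append-karg bond=bond0:em1,em2:mode=active-backup --append-karg vlan=bond0.100:bond0 --append-karg nameserver=4.4.4.41 /dev/sda",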
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 
31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods 
--all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "example.com", "<cluster-name>.example.com", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz > oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "OCP_VERSION=<ocp_version> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep x86_64 | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL > rhcos-live.x86_64.iso", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: networkType: OVNKubernetes clusterNetwork: - cidr: <IP_address>/<prefix> 5 hostPrefix: <prefix> 6 serviceNetwork: - <IP_address>/<prefix> 7 platform: none: {} bootstrapInPlace: installationDisk: <path_to_install_drive> 8 pullSecret: '<pull_secret>' 9 sshKey: | <ssh_key> 10", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'", "cp ocp/bootstrap-in-place-for-live-iso.ign iso.ign", "coreos-installer iso ignition embed -fi iso.ign rhcos-live.x86_64.iso", "dd if=<path-to-iso> of=<path/to/usb> status=progress", "dd if=discovery_image_sno.iso of=/dev/sdb status=progress", "./openshift-install --dir=ocp wait-for install-complete", "export KUBECONFIG=ocp/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "<cluster_name>.<base_domain>", "test-cluster.example.com", "useradd kni passwd kni echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni chmod 0440 /etc/sudoers.d/kni", "su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"", "su - kni USD", "sudo subscription-manager register --username=<user> --password=<pass> --auto-attach sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms", "sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool", "sudo usermod --append --groups libvirt <user>", "sudo systemctl start firewalld sudo firewall-cmd --zone=public --add-service=http --permanent sudo firewall-cmd --reload", "sudo systemctl enable libvirtd --now", "sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images sudo virsh pool-start default sudo virsh pool-autostart default", "export PUB_CONN=<baremetal_nic_name>", "sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill 
dhclient;dhclient baremetal \"", "export PROV_CONN=<prov_nic_name>", "sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"", "nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual", "ssh kni@provisioner.<cluster-name>.<domain>", "sudo nmcli con show", "NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2", "vim pull-secret.txt", "export VERSION=stable-4.9 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')", "export cmd=openshift-baremetal-install export pullsecret_file=~/pull-secret.txt export extract_dir=USD(pwd)", "curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE} sudo cp openshift-baremetal-install /usr/local/bin", "sudo dnf install -y podman", "sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent", "sudo firewall-cmd --reload", "mkdir /home/kni/rhcos_image_cache", "sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"", "sudo restorecon -Rv /home/kni/rhcos_image_cache/", "export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')", "export export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}", "export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')", "export RHCOS_OPENSTACK_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.openstack.formats[\"qcow2.gz\"].disk.location')", "export RHCOS_OPENSTACK_NAME=USD{RHCOS_OPENSTACK_URI##*/}", "export RHCOS_OPENSTACK_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.openstack.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')", "curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}", "curl -L USD{RHCOS_OPENSTACK_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_OPENSTACK_NAME}", "ls -Z /home/kni/rhcos_image_cache", "podman run -d --name rhcos_image_cache -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp quay.io/centos7/httpd-24-centos7:latest", "export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)", "export 
BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"", "export CLUSTER_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_OPENSTACK_NAME}?sha256=USD{RHCOS_OPENSTACK_UNCOMPRESSED_SHA256}\"", "echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"", "echo \" clusterOSImage=USD{CLUSTER_OS_IMAGE}\"", "platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 clusterOSImage: <cluster_os_image> 2", "apiVersion: v1 baseDomain: <domain> metadata: name: <cluster-name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api-ip> ingressVIP: <wildcard-ip> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> 2 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: \"/dev/disk/by-id/<disk_id>\" 3 - name: <openshift-master-1> role: master bmc: address: ipmi://<out-of-band-ip> 4 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: \"/dev/disk/by-id/<disk_id>\" 5 - name: <openshift-master-2> role: master bmc: address: ipmi://<out-of-band-ip> 6 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: \"/dev/disk/by-id/<disk_id>\" 7 - name: <openshift-worker-0> role: worker bmc: address: ipmi://<out-of-band-ip> 8 username: <user> password: <password> bootMACAddress: <NIC1-mac-address> - name: <openshift-worker-1> role: worker bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> bootMACAddress: <NIC1-mac-address> rootDeviceHints: deviceName: \"/dev/disk/by-id/<disk_id>\" 9 pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'", "mkdir ~/clusterconfigs cp install-config.yaml ~/clusterconfigs", "ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>", "noProxy: .example.com,172.22.0.0/24,10.10.0.0/24", "platform: baremetal: apiVIP: <api_VIP> ingressVIP: <ingress_VIP> provisioningNetwork: \"Disabled\" 1", "machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112", "hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2", "metadata: name:", "networking: machineNetwork: - cidr:", "compute: - name: worker", "compute: replicas: 2", "controlPlane: name: master", "controlPlane: replicas: 3", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: 
redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True", "platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True", "platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True", "platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>", "platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>", "- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated", "variant: openshift version: 4.9.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). 
# The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan", "butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony", "butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml", "cd ~/clusterconfigs", "cd manifests", "touch cluster-network-avoid-workers-99-config.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"", "sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml", "vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-3.yaml", "spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true", "sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent sudo firewall-cmd --reload", "sudo yum -y install python3 podman httpd httpd-tools jq", "sudo mkdir -p /opt/registry/{auth,certs,data}", "host_fqdn=USD( hostname --long ) cert_c=\"<Country Name>\" # Country Name (C, 2 letter code) cert_s=\"<State>\" # Certificate State (S) cert_l=\"<Locality>\" # Certificate Locality (L) cert_o=\"<Organization>\" # Certificate Organization (O) cert_ou=\"<Org Unit>\" # Certificate Organizational Unit (OU) cert_cn=\"USD{host_fqdn}\" # Certificate Common Name (CN) openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:USD{host_fqdn}\" -subj \"/C=USD{cert_c}/ST=USD{cert_s}/L=USD{cert_l}/O=USD{cert_o}/OU=USD{cert_ou}/CN=USD{cert_cn}\"", "sudo cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ sudo update-ca-trust extract", "htpasswd -bBc /opt/registry/auth/htpasswd <user> 
<passwd>", "podman create --name ocpdiscon-registry -p 5000:5000 -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry\" -e \"REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry\" -e \"REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd\" -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e \"REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true\" -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z docker.io/library/registry:2", "podman start ocpdiscon-registry", "scp kni@provisioner:/home/kni/pull-secret.txt pull-secret.txt", "host_fqdn=USD( hostname --long )", "b64auth=USD( echo -n '<username>:<passwd>' | openssl base64 )", "AUTHSTRING=\"{\\\"USDhost_fqdn:5000\\\": {\\\"auth\\\": \\\"USDb64auth\\\",\\\"email\\\": \\\"[email protected]\\\"}}\"", "jq \".auths += USDAUTHSTRING\" < pull-secret.txt > pull-secret-update.txt", "sudo scp kni@provisioner:/usr/local/bin/oc /usr/local/bin", "VERSION=<release_version>", "LOCAL_REG='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPO='<local_repository_name>'", "/usr/local/bin/oc adm release mirror -a pull-secret-update.txt --from=USDUPSTREAM_REPO --to-release-image=USDLOCAL_REG/USDLOCAL_REPO:USD{VERSION} --to=USDLOCAL_REG/USDLOCAL_REPO", "echo \"additionalTrustBundle: |\" >> install-config.yaml sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml", "echo \"imageContentSources:\" >> install-config.yaml echo \"- mirrors:\" >> install-config.yaml echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml echo \"- mirrors:\" >> install-config.yaml echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"", "cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log", "variant: openshift version: 4.9.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. 
local stratum 3 orphan", "butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony", "butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml", "oc apply -f 99-master-chrony-conf-override.yaml", "machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created", "oc apply -f 99-worker-chrony-conf-override.yaml", "machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created", "oc describe machineconfigpool", "oc get provisioning -o yaml > enable-provisioning-nw.yaml", "vim ~/enable-provisioning-nw.yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningOSDownloadURL: 2 provisioningIP: 3 provisioningNetworkCIDR: 4 provisioningDHCPRange: 5 provisioningInterface: 6 watchAllNameSpaces: 7", "oc apply -f enable-provisioning-nw.yaml", "listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check", "<load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain>", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl -s 
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin", "echo -ne \"root\" | base64", "echo -ne \"password\" | base64", "vim bmh.yaml", "--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret type: Opaque data: username: <base64-of-uid> password: <base64-of-pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> spec: online: true bootMACAddress: <NIC1-mac-address> bmc: address: <protocol>://<bmc-ip> credentialsName: openshift-worker-<num>-bmc-secret", "oc -n openshift-machine-api create -f bmh.yaml", "secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> ready true", "oc get clusteroperator baremetal", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.9.0 True False False 3d15h", "oc delete bmh -n openshift-machine-api <host_name> oc delete machine -n openshift-machine-api <machine_name>", "cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false hardwareProfile: unknown online: true EOF", "oc get bmh -n openshift-machine-api", "NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m", "cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF", "oc get bmh -A", "NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.18.2 control-plane-2.example.com available master 141m v1.18.2 control-plane-3.example.com available master 141m v1.18.2 compute-1.example.com available worker 87m v1.18.2 
compute-2.example.com available worker 87m v1.18.2", "edit provisioning", "apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: \"2021-08-05T18:51:50Z\" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: \"551591\" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: preProvisioningOSDownloadURLs: {} provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 provisioningOSDownloadURL: http://192.168.111.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha256> virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: \"\" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: \"\" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0", "edit machineset", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: \"2021-08-05T18:51:52Z\" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: \"551513\" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.x86_64.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.x86_64.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2", "oc get bmh -n openshift-machine-api", "NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering", "oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml", "status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> ready true", "oc get nodes", "NAME STATUS ROLES AGE VERSION provisioner.openshift.example.com Ready master 30h v1.22.1 openshift-master-1.openshift.example.com Ready master 30h v1.22.1 openshift-master-2.openshift.example.com Ready master 30h v1.22.1 openshift-master-3.openshift.example.com Ready master 30h v1.22.1 openshift-worker-0.openshift.example.com Ready master 30h v1.22.1 openshift-worker-1.openshift.example.com Ready master 30h 
v1.22.1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m", "oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true", "oc get nodes", "NAME STATUS ROLES AGE VERSION provisioner.openshift.example.com Ready master 30h v1.22.1 openshift-master-1.openshift.example.com Ready master 30h v1.22.1 openshift-master-2.openshift.example.com Ready master 30h v1.22.1 openshift-master-3.openshift.example.com Ready master 30h v1.22.1 openshift-worker-0.openshift.example.com Ready master 30h v1.22.1 openshift-worker-1.openshift.example.com Ready master 30h v1.22.1 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.22.1", "ssh openshift-worker-<num>", "[kni@openshift-worker-<num>]USD journalctl -fu kubelet", "curl -s -o /dev/null -I -w \"%{http_code}\\n\" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.x86_64.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7", "sudo virsh list", "Id Name State -------------------------------------------- 12 openshift-xf6fq-bootstrap running", "systemctl status libvirtd", "● libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-03-03 21:21:07 UTC; 3 weeks 5 days ago Docs: man:libvirtd(8) https://libvirt.org Main PID: 9850 (libvirtd) Tasks: 20 (limit: 32768) Memory: 74.8M CGroup: /system.slice/libvirtd.service ├─ 9850 /usr/sbin/libvirtd", "sudo virsh console example.com", "Connected to domain example.com Escape character is ^] Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3 SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519) SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA) SSH host key: SHA256:DH5VWhvhvagOTaLsYiVNse9ca+ZSW/30OOMed8rIGOc (RSA) ens3: fd35:919d:4042:2:c7ed:9a9f:a9ec:7 ens4: 172.22.0.2 fe80::1d05:e52e:be5d:263f localhost login:", "ssh [email protected]", "ssh [email protected]", "[core@localhost ~]USD sudo podman logs -f <container-name>", "ipmitool -I lanplus -U root -P <password> -H <out-of-band-ip> power off", "bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.x86_64.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.x86_64.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0", "ssh [email protected]", "[core@localhost ~]USD sudo podman logs -f ipa-downloader", "[core@localhost ~]USD sudo podman logs -f coreos-downloader", "[core@localhost ~]USD journalctl -xe", "[core@localhost ~]USD journalctl -b -f -u bootkube.service", "[core@localhost ~]USD sudo podman ps", "[core@localhost ~]USD sudo podman logs <ironic-api>", "bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC hardwareProfile: default #control plane node settings", "bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC hardwareProfile: unknown #worker node settings", "hostname", "hostnamectl set-hostname <hostname>", "dig api.<cluster-name>.example.com", "; <<>> DiG 
9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> api.<cluster-name>.example.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37551 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 866929d2f8e8563582af23f05ec44203d313e50948d43f60 (good) ;; QUESTION SECTION: ;api.<cluster-name>.example.com. IN A ;; ANSWER SECTION: api.<cluster-name>.example.com. 10800 IN A 10.19.13.86 ;; AUTHORITY SECTION: <cluster-name>.example.com. 10800 IN NS <cluster-name>.example.com. ;; ADDITIONAL SECTION: <cluster-name>.example.com. 10800 IN A 10.19.14.247 ;; Query time: 0 msec ;; SERVER: 10.19.14.247#53(10.19.14.247) ;; WHEN: Tue May 19 20:30:59 UTC 2020 ;; MSG SIZE rcvd: 140", "ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json", "/usr/local/bin/oc adm release mirror -a pull-secret-update.json --from=USDUPSTREAM_REPO --to-release-image=USDLOCAL_REG/USDLOCAL_REPO:USD{VERSION} --to=USDLOCAL_REG/USDLOCAL_REPO", "UPSTREAM_REPO=USD{RELEASE_IMAGE} LOCAL_REG=<registry_FQDN>:<registry_port> LOCAL_REPO='ocp4/openshift4'", "curl -k -u <user>:<password> https://registry.example.com:<registry-port>/v2/_catalog {\"repositories\":[\"<Repo-Name>\"]}", "`runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network`", "oc get all -n openshift-network-operator", "NAME READY STATUS RESTARTS AGE pod/network-operator-69dfd7b577-bg89v 0/1 ContainerCreating 0 149m", "kubectl get network.config.openshift.io cluster -oyaml", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNetwork: - 172.30.0.0/16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networkType: OpenShiftSDN", "openshift-install create manifests", "kubectl -n openshift-network-operator get pods", "kubectl -n openshift-network-operator logs -l \"name=network-operator\"", "This is a dnsmasq dhcp reservation, 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC Address for the NIC id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]", "Failed Units: 2 NetworkManager-wait-online.service nodeip-configuration.service", "[core@master-X ~]USD hostname", "[core@master-X ~]USD sudo nmcli con up \"<bare-metal-nic>\"", "[core@master-X ~]USD hostname", "[core@master-X ~]USD sudo systemctl restart NetworkManager", "[core@master-X ~]USD sudo systemctl restart nodeip-configuration.service", "[core@master-X ~]USD sudo systemctl daemon-reload", "[core@master-X ~]USD sudo systemctl restart kubelet.service", "[core@master-X ~]USD sudo journalctl -fu kubelet.service", "oc get csr", "oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text", "oc delete csr <wrong_csr>", "oc get route oauth-openshift", "oc get svc oauth-openshift", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE oauth-openshift ClusterIP 172.30.19.162 <none> 443/TCP 59m", "[core@master0 ~]USD curl -k https://172.30.19.162", "{ \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, 
\"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403", "oc logs deployment/authentication-operator -n openshift-authentication-operator", "Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"openshift-authentication-operator\", Name:\"authentication-operator\", UID:\"225c5bd5-b368-439b-9155-5fd3c0459d98\", APIVersion:\"apps/v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from \"IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting\"", "Failed Units: 1 machine-config-daemon-firstboot.service", "[core@worker-X ~]USD sudo systemctl restart machine-config-daemon-firstboot.service", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0.cloud.example.com Ready master 145m v1.22.1 master-1.cloud.example.com Ready master 135m v1.22.1 master-2.cloud.example.com Ready master 145m v1.22.1 worker-2.cloud.example.com Ready worker 100m v1.22.1", "oc get bmh -n openshift-machine-api", "master-1 error registering master-1 ipmi://<out-of-band-ip>", "sudo timedatectl", "Local time: Tue 2020-03-10 18:20:02 UTC Universal time: Tue 2020-03-10 18:20:02 UTC RTC time: Tue 2020-03-10 18:36:53 Time zone: UTC (UTC, +0000) System clock synchronized: no NTP service: active RTC in local TZ: no", "variant: openshift version: 4.9.0 metadata: name: 99-master-chrony labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | server <NTP-server> iburst 1 stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony", "butane 99-master-chrony.bu -o 99-master-chrony.yaml", "oc apply -f 99-master-chrony.yaml", "sudo timedatectl", "Local time: Tue 2020-03-10 19:10:02 UTC Universal time: Tue 2020-03-10 19:10:02 UTC RTC time: Tue 2020-03-10 19:36:53 Time zone: UTC (UTC, +0000) System clock synchronized: yes NTP service: active RTC in local TZ: no", "cp chrony-masters.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0.example.com Ready master,worker 4h v1.22.1 master-1.example.com Ready master,worker 4h v1.22.1 master-2.example.com Ready master,worker 4h v1.22.1", "oc get pods --all-namespaces | grep -iv running | grep -iv complete", "<cluster_name>.<domain>", "test-cluster.example.com", "ipmi://<IP>:<port>?privilegelevel=OPERATOR", "ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>", "useradd kni", "passwd kni", "echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni", "chmod 0440 /etc/sudoers.d/kni", "su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"", "su - kni", "sudo subscription-manager register --username=<user> --password=<pass> --auto-attach", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms", "sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool", "sudo usermod --append --groups libvirt kni", "sudo systemctl start firewalld", "sudo systemctl enable firewalld", "sudo firewall-cmd --zone=public --add-service=http --permanent", "sudo 
firewall-cmd --reload", "sudo systemctl enable libvirtd --now", "PRVN_HOST_ID=<ID>", "ibmcloud sl hardware list", "PUBLICSUBNETID=<ID>", "ibmcloud sl subnet list", "PRIVSUBNETID=<ID>", "ibmcloud sl subnet list", "PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)", "PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)", "PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR", "PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)", "PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)", "PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)", "PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR", "PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)", "sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"", "ssh kni@provisioner.<cluster-name>.<domain>", "sudo nmcli con show", "NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2", "vim pull-secret.txt", "sudo dnf install dnsmasq", "sudo vi /etc/dnsmasq.conf", "interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r", "ibmcloud sl hardware list", "ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null", "\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"", "sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile", "00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1", "sudo systemctl start dnsmasq", "sudo systemctl enable dnsmasq", "sudo systemctl status dnsmasq", "● dnsmasq.service - DNS caching server. 
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k", "sudo firewall-cmd --add-port 53/udp --permanent", "sudo firewall-cmd --add-port 67/udp --permanent", "sudo firewall-cmd --change-zone=provisioning --zone=external --permanent", "sudo firewall-cmd --reload", "export VERSION=stable-4.9 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')", "export cmd=openshift-baremetal-install export pullsecret_file=~/pull-secret.txt export extract_dir=USD(pwd)", "curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE} sudo cp openshift-baremetal-install /usr/local/bin", "apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'", "ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'", "mkdir ~/clusterconfigs", "cp install-config.yaml ~/clusterconfig", "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "metadata: name:", "networking: machineNetwork: - cidr:", "compute: - name: worker", "compute: replicas: 2", "controlPlane: name: master", "controlPlane: replicas: 3", "- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log", "USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 
0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", 
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m 
openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc login -u <username>", "oc debug node/<nodename>", "chroot /host toolbox", "dbginfo.sh", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s 
system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m 
machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc login -u <username>", "oc debug node/<nodename>", "chroot /host toolbox", "dbginfo.sh", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional", "virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m 
cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods 
--all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc login -u <username>", "oc debug node/<nodename>", "chroot /host toolbox", "dbginfo.sh", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional", "virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True 
False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True 
False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc login -u <username>", "oc debug node/<nodename>", "chroot /host toolbox", "dbginfo.sh", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME 
VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir 
<installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME 
VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True 
False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "openstack role add --user <user> --project <project> swiftoperator", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", 
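The StorageClass, PersistentVolumeClaim, and registry patch shown above form a single workflow. A minimal sketch of running them in order, assuming the manifests were saved as custom-csi-storageclass.yaml and csi-pvc-imageregistry.yaml (hypothetical names for the <storage_class_file_name> and <pvc_file_name> placeholders) and that KUBECONFIG is already exported:

# Create the Cinder CSI storage class and the claim for the image registry.
oc apply -f custom-csi-storageclass.yaml
oc apply -f csi-pvc-imageregistry.yaml
# Point the registry Operator at the new claim, then confirm the Operator reports Available.
oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]'
oc get clusteroperator image-registry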
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3", "./openshift-install wait-for install-complete --log-level debug", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: 
<infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787", "(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack project show <project>", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+", "source stackrc # Undercloud credentials", 
"openstack server list", "+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+", "ssh [email protected]", "List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID", "controller-0USD sudo docker restart octavia_worker", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+", "openstack role add --user <user> --project <project> swiftoperator", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "ip route add <cluster_network_cidr> via <installer_subnet_gateway>", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - 
name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create 
--description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "openstack role add --user <user> --project <project> swiftoperator", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: 
cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target", "oc apply -f <machine_config_file_name>.yaml", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK", "oc apply -f <machine_config_file_name>.yaml", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk 
python3-netaddr", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. 
os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'", "./openshift-install wait-for install-complete --log-level debug", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "openshift-install --log-level debug wait-for install-complete", "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: 
<local-ip-from-undercloud.conf>:8787", "(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack project show <project>", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+", "source stackrc # Undercloud credentials", "openstack server list", "+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+", "ssh heat-admin@192.168.24.8", "List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID", "controller-0USD sudo docker restart octavia_worker", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. 
| +---------+-------------------------------------------------+", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir 
<installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. 
os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "openshift-install --log-level debug wait-for install-complete", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-compute-nodes.yaml 
https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.9/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. 
os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'", "./openshift-install wait-for install-complete --log-level debug", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink", ". If this value is non-empty, the corresponding floating IP will be attached to the bootstrap machine. This is needed for collecting logs in case of install failure. os_bootstrap_fip: '203.0.113.20' additionalNetworks: - id: radio count: 4 1 type: direct port_security_enabled: no - id: uplink count: 4 2 type: direct port_security_enabled: no", "- import_playbook: common.yaml - hosts: all gather_facts: no vars: worker_list: [] port_name_list: [] nic_list: [] tasks: # Create the SDN/primary port for each worker node - name: 'Create the Compute ports' os_port: name: \"{{ item.1 }}-{{ item.0 }}\" network: \"{{ os_network }}\" security_groups: - \"{{ os_sg_worker }}\" allowed_address_pairs: - ip_address: \"{{ os_ingressVIP }}\" with_indexed_items: \"{{ [os_port_worker] * os_compute_nodes_number }}\" register: ports # Tag each SDN/primary port with cluster name - name: 'Set Compute ports tag' command: cmd: \"openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}\" with_indexed_items: \"{{ [os_port_worker] * os_compute_nodes_number }}\" - name: 'List the Compute Trunks' command: cmd: \"openstack network trunk list\" when: os_networking_type == \"Kuryr\" register: compute_trunks - name: 'Create the Compute trunks' command: cmd: \"openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}\" with_indexed_items: \"{{ ports.results }}\" when: - os_networking_type == \"Kuryr\" - \"os_compute_trunk_name|string not in compute_trunks.stdout\" - name: 'Call additional-port processing' include_tasks: additional-ports.yaml # Create additional ports in OpenStack - name: 'Create additionalNetworks ports' os_port: name: \"{{ item.0 }}-{{ item.1.name }}\" vnic_type: \"{{ item.1.type }}\" network: \"{{ item.1.uuid }}\" port_security_enabled: \"{{ item.1.port_security_enabled|default(omit) }}\" no_security_groups: \"{{ 'true' if item.1.security_groups is not defined else omit }}\" security_groups: \"{{ item.1.security_groups | default(omit) }}\" 
with_nested: - \"{{ worker_list }}\" - \"{{ port_name_list }}\" # Tag the ports with the cluster info - name: 'Set additionalNetworks ports tag' command: cmd: \"openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}\" with_nested: - \"{{ worker_list }}\" - \"{{ port_name_list }}\" # Build the nic list to use for server create - name: Build nic list set_fact: nic_list: \"{{ nic_list | default([]) + [ item.name ] }}\" with_items: \"{{ port_name_list }}\" # Create the servers - name: 'Create the Compute servers' vars: worker_nics: \"{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\\\1') | list }}\" os_server: name: \"{{ item.1 }}\" image: \"{{ os_image_rhcos }}\" flavor: \"{{ os_flavor_worker }}\" auto_ip: no userdata: \"{{ lookup('file', 'worker.ign') | string }}\" security_groups: [] nics: \"{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}\" config_drive: yes with_indexed_items: \"{{ worker_list }}\"", "Build a list of worker nodes with indexes - name: 'Build worker list' set_fact: worker_list: \"{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}\" with_indexed_items: \"{{ [ os_compute_server_name ] * os_compute_nodes_number }}\" Ensure that each network specified in additionalNetworks exists - name: 'Verify additionalNetworks' os_networks_info: name: \"{{ item.id }}\" with_items: \"{{ additionalNetworks }}\" register: network_info Expand additionalNetworks by the count parameter in each network definition - name: 'Build port and port index list for additionalNetworks' set_fact: port_list: \"{{ port_list | default([]) + [ { 'net_name' : item.1.id, 'uuid' : network_info.results[item.0].openstack_networks[0].id, 'type' : item.1.type|default('normal'), 'security_groups' : item.1.security_groups|default(omit), 'port_security_enabled' : item.1.port_security_enabled|default(omit) } ] * item.1.count|default(1) }}\" index_list: \"{{ index_list | default([]) + range(item.1.count|default(1)) | list }}\" with_indexed_items: \"{{ additionalNetworks }}\" Calculate and save the name of the port The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count i.e. 
fdp-nz995-worker-1-99bcd111-1 - name: 'Calculate port name' set_fact: port_name_list: \"{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}\" with_indexed_items: \"{{ port_list }}\" when: port_list is defined", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "openshift-install --log-level debug wait-for install-complete", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target", "oc apply -f <machine_config_file_name>.yaml", "kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK", "oc apply -f <machine_config_file_name>.yaml", "openstack role add --user <user> --project <project> swiftoperator", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 #", "oc edit configmap -n openshift-config cloud-provider-config", "file 
<name_of_downloaded_file>", "openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}", "./openshift-install create install-config --dir <installation_directory> 1", "platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "ip route add <cluster_network_cidr> via <installer_subnet_gateway>", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> 
machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk", "sudo alternatives --set python /usr/bin/python3", "ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 
*.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "rhv-env.virtlab.example.com:443", "<username>@<profile> 1", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "https://<engine-fqdn>/ovirt-engine/api 1", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "<username>@<profile> 1", "ocpadmin@internal", "[ovirt_ca_bundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE-----", "[additionalTrustBundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE-----", "./openshift-install create install-config --dir 
<installation_directory>", "apiVersion: v1 baseDomain: example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: my-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: ovirt: api_vip: 10.46.8.230 ingress_vip: 192.168.1.5 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 publish: External pullSecret: '{\"auths\": ...}' sshKey: ssh-ed12345 AAAA", "apiVersion: v1 baseDomain: example.com metadata: name: test-cluster platform: ovirt: api_vip: 10.46.8.230 ingress_vip: 10.46.8.232 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed12345 AAAA", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: ovirt: cpu: cores: 4 sockets: 2 memoryMB: 65536 osDisk: sizeGB: 100 vmType: server replicas: 3 compute: - name: worker platform: ovirt: cpu: cores: 4 sockets: 4 memoryMB: 65536 osDisk: sizeGB: 200 vmType: server replicas: 5 metadata: name: test-cluster platform: ovirt: api_vip: 10.46.8.230 ingress_vip: 10.46.8.232 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "platform: ovirt: affinityGroups: - description: AffinityGroup to place each compute machine on a separate host enforcing: true name: compute priority: 3 - description: AffinityGroup to place each control plane machine on a separate host enforcing: true name: controlplane priority: 5 - description: AffinityGroup to place worker nodes and control plane nodes on separate hosts enforcing: false name: openshift priority: 5 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: ovirt: affinityGroupsNames: - compute - openshift replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: ovirt: affinityGroupsNames: - controlplane - openshift replicas: 3", "platform: ovirt: affinityGroups: [] compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: ovirt: affinityGroupsNames: [] replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: ovirt: affinityGroupsNames: [] replicas: 3", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "<machine-pool>: platform: ovirt: affinityGroupNames: - compute - clusterWideNonEnforcing", "<machine-pool>: platform: ovirt: affinityGroupNames: []", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir playbooks", "cd playbooks", "curl -s -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt?ref=release-4.9 | grep 'download_url.*\\.yml' | awk '{ print USD2 }' | sed -r 's/(\"|\",)//g' | xargs -n 1 curl -O", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ 
control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . 
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<boostrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR", "INFO API v1.22.1 up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 
7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem 1", "sudo chmod 0644 /tmp/ca.pem", "sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem", "sudo update-ca-trust", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir playbooks", "cd playbooks", "curl -s -L -X GET https://api.github.com/repos/openshift/installer/contents/upi/ovirt?ref=release-4.9 | grep 'download_url.*\\.yml' | awk '{ print USD2 }' | sed -r 's/(\"|\",)//g' | xargs -n 1 curl -O", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: 
lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 
15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . 
└── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<bootstrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir $ASSETS_DIR", "INFO API v1.22.1 up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=$ASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir $ASSETS_DIR --log-level debug", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ansible-playbook -i inventory.yml retire-bootstrap.yml retire-masters.yml retire-workers.yml", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"$(ssh-agent -s)\"", "Agent pid 31874", "ssh-add 
<path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 
osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 
hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. 
IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m 
csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True 
False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 
17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m 
image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "ssh-keygen -t ed25519 -N '' -f 
<path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 
17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m 
image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m 
service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator", "oc wait pods -l name=vsphere-problem-detector-operator --for=delete --timeout=5m -n openshift-cluster-storage-operator", "oc scale deployment/vsphere-problem-detector-operator --replicas=1 -n openshift-cluster-storage-operator", "oc delete -n openshift-cluster-storage-operator cm vsphere-problem-detector-lock", "oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}", "16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader", "oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator", "I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed", "oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID", "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", 
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 
baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "platform: vsphere: clusterOSImage: 
http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 
17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to 
complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim 
apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. 
IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m 
image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 
0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m 
image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m
service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 
15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.9/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", 
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "cp rhcos-<version>-live.x86_64.iso bootstrap.ign /mnt/ chmod 644 /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition embed -i /mnt/bootstrap.ign /mnt/rhcos-<version>-live.x86_64.iso", "./coreos-installer iso ignition show /mnt/rhcos-<version>-live.x86_64.iso > mybootstrap.ign", "diff -s bootstrap.ign mybootstrap.ign", "Files bootstrap.ign and mybootstrap.ign are identical", "./coreos-installer iso ignition remove /mnt/rhcos-<version>-live.x86_64.iso", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup 
vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "# bootupctl status", "Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version", "# bootupctl adopt-and-update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "# bootupctl update", "Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64", "variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 
True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane", "curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane", "chmod +x butane", "echo USDPATH", "butane <butane_file>", "variant: openshift
version: 4.9.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-custom.bu -o ./99-worker-custom.yaml", "oc create -f 99-worker-custom.yaml", "./openshift-install create manifests --dir <installation_directory>", "cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "cd kmods-via-containers/", "sudo make install", "sudo systemctl daemon-reload", "cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "cd kvc-simple-kmod", "cat simple-kmod.conf", "KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"", "sudo make install", "sudo kmods-via-containers build simple-kmod USD(uname -r)", "sudo systemctl enable [email protected] --now", "sudo systemctl status [email protected]", "● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "dmesg | grep 'Hello world'", "[ 6420.761332] Hello world from simple_kmod.", "sudo cat /proc/simple-procfs-kmod", "simple-procfs-kmod number = 0", "sudo spkut 44", "KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44", "subscription-manager register", "subscription-manager attach --auto", "yum install podman make git -y", "mkdir kmods; cd kmods", "git clone https://github.com/kmods-via-containers/kmods-via-containers", "git clone https://github.com/kmods-via-containers/kvc-simple-kmod", "FAKEROOT=USD(mktemp -d)", "cd kmods-via-containers", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd ../kvc-simple-kmod", "make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/", "cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree", "variant: openshift version: 4.9.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true", "butane 99-simple-kmod.bu --files-dir . 
-o 99-simple-kmod.yaml", "oc create -f 99-simple-kmod.yaml", "lsmod | grep simple_", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "variant: openshift version: 4.9.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 luks: tpm2: true 1 tang: 2 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 3 openshift: fips: true", "sudo yum install clevis", "clevis-encrypt-tang '{\"url\":\"http://tang.example.com:7500\"}' < /dev/null > /dev/null 1", "The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1", "./openshift-install create manifests --dir <installation_directory> 1", "variant: openshift version: 4.9.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12", "butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml", "oc debug node/compute-1", "chroot /host", "cryptsetup status root", "/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write", "clevis luks list -d /dev/sda4 1", "1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1", "cat /proc/mdstat", "Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>", "mdadm --detail /dev/md126", "/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8", "mount | grep /dev/md", "/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on 
/var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)", "variant: openshift version: 4.9.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true", "variant: openshift version: 4.9.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true", "butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"6zYIx-ckbW3-4d2Ne-IWvDF\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.9.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 
17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.22.1 control-plane-1.example.com Ready master 41m v1.22.1 control-plane-2.example.com Ready master 45m v1.22.1 compute-2.example.com Ready worker 38m v1.22.1 compute-3.example.com Ready worker 33m v1.22.1 control-plane-3.example.com Ready master 41m v1.22.1", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address>\" 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "journalctl -b -f -u bootkube.service", "for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done", "tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log", "journalctl -b -f -u kubelet.service -u crio.service", "sudo tail -f /var/log/containers/*", "oc adm node-logs --role=master -u kubelet", "oc adm node-logs --role=master --path=openshift-apiserver", "cat ~/<installation_directory>/.openshift_install.log 1", "./openshift-install create cluster --dir <installation_directory> --log-level debug 1", "./openshift-install destroy cluster --dir <installation_directory> 1", "rm -rf <installation_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/installing/index
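The encryption and mirroring checks listed above can be combined into a single debug session. The following is a minimal sketch, assuming a worker node named compute-1 and a root partition on /dev/sda4 as in the example output; substitute the node and device names from your own cluster.

# Open a debug shell on the node and switch into the host's root file system.
oc debug node/compute-1
chroot /host

# Confirm that the root device is LUKS2-encrypted and bound to the expected Tang or TPM2 pins.
cryptsetup status root
clevis luks list -d /dev/sda4

# Confirm that the mirrored boot and root arrays are active and clean.
cat /proc/mdstat
mdadm --detail /dev/md126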
8.88. i2c-tools
8.88. i2c-tools 8.88.1. RHBA-2014:1520 - i2c-tools bug fix update Updated i2c-tools packages that fix one bug are now available for Red Hat Enterprise Linux 6. The i2c-tools packages contain a set of I2C tools for Linux: a bus probing tool, a chip dumper, register-level SMBus access helpers, EEPROM (Electrically Erasable Programmable Read-Only Memory) decoding scripts, EEPROM programming tools, and a python module for SMBus access. Note: EEPROM decoding scripts can render your system unusable. Make sure to use these tools wisely. This update fixes the following bug: Bug Fix BZ# 914728 The i2cdetect utility requires the i2c-dev module to be loaded so that it can detect devices present on a specified bus. Previously, this was not done automatically, and as a consequence, the user could mistakenly conclude that no buses or devices existed when running i2cdetect. With this update, i2c-dev has been made automatically loadable. Now, i2cdetect correctly scans an I2C bus for devices and outputs a table with detected devices as expected. Users of i2c-tools are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/i2c-tools
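As a short illustration of the fix described above, the module can still be loaded by hand and a bus scanned afterwards; bus number 0 is only an example, so list the available buses first.

# Load the i2c-dev module manually (the updated packages make this automatic).
modprobe i2c-dev

# List the I2C buses that are present on the system.
i2cdetect -l

# Scan bus 0 for devices; -y suppresses the interactive confirmation prompt.
i2cdetect -y 0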
Chapter 1. Single Sign-On with SAML v2 Deeper Dive
Chapter 1. Single Sign-On with SAML v2 Deeper Dive The basics of Single Sign-On and SAML are covered in the JBoss EAP Security Architecture guide . This section takes a deeper dive into the components involved in SAML v2 and Single Sign-On. 1.1. What is SAML v2? Security Assertion Markup Language, or SAML, is a data format and protocol that allows two parties, usually an identity provider and a service provider, to exchange authentication and authorization information. This information is exchanged in the form of SAML tokens that contain assertions, and are issued by Identity Providers to subjects for authenticating with Service Providers. The ability for subjects to use and reuse SAML tokens issued from an identity provider with multiple service providers allow SAML v2 to facilitate browser-based Single Sign-On. 1.1.1. Building Blocks The most important concept to keep in mind with SAML is that its all about passing security assertions between entities. SAML has several components it uses to accomplish this task. 1.1.1.1. Entities Entities are all parties involved in creating and passing assertions. SAML has the concept of three distinct entities: subject The subject , also referred to as the principal , which is the user in most cases, is requesting access to a resource on a service provider , which is secured by SAML. service provider The service provider , or SP , requires proof, as an assertion, of the subject 's identity, which it needs from the identity provider . identity provider The identity provider , or IDP , provides a set of assertions, in the form of a token about a subject , that can be used in authentication and authorization decisions by service providers . In summary, subjects get issued assertions, identity providers issue those assertions, and service providers use those assertions to authenticate and authorize subjects . 1.1.1.2. Security Assertions A security assertion is a set of statements issued by an identity provider about a subject. Service providers use these assertions to make access-control decisions about a subject. Statements can take the following forms: Authentication Authentication assertions assert that a subject successfully authenticated using a specified method at a specific point in time. An authentication context containing other information about the authenticated subject can also be specified in an authentication statement. Attribute Attribute assertions assert that a subject has certain attributes. Authorization Decision Authorization Decision assertions assert a response, accept or deny , to an authorization request for a subject on a resource. Example The statement This user logged in as Sarah at 9:30 using a username and password is an Authentication assertion. The statement Sarah is a member of the Managers group is an Attribute assertion. The statement Sarah is accepted to access the Employee Information resource is an Authorization Decision assertion. Assertions are packaged as SAML tokens and transported using SAML protocols. 1.1.1.3. Protocols A SAML protocol describes how assertions are packaged, usually in the form of a request and response, as well as the rules on the correct way to process them. These rules must be followed by both the producers and consumers of the requests and responses. A request can ask for specific, known assertions or query identity providers for authentication, attribute, or authorization decisions. 
The request and response messages, which include security assertions, are formatted in XML and adhere to a specified schema. 1.1.1.4. Bindings SAML bindings specify how SAML protocols map to other standard protocols used for transport and messaging. Some examples include: A SAML binding that maps to an HTTP redirect. A SAML binding that maps to an HTTP POST . A SAML binding that maps SAML requests/responses to SOAP requests and responses. 1.1.1.5. Profiles SAML profiles use assertions, protocols, and bindings to support specific use cases, such as Web Browser Single Sign-On, Single Logout, and Assertion Query. 1.2. How Does SAML v2 Work with Single Sign-On The basics of browser-based Single Sign-On with SAML v2 are covered in the JBoss EAP Security Architecture guide, specifically in the Browser-Based Single Sign-On Using SAML and Multiple Red Hat JBoss Enterprise Application Platform Instances and Multiple Applications Using Browser-Based Single Sign-On with SAML sections. This section gives a more in-depth explanation regarding the SAML profiles and bindings related to browser-based Single Sign-On with SAML v2. 1.2.1. Web Browser Single Sign-On Profile The Web Browser Single Sign-On profile specifies the way an IDP, SP, and principal, in the form of a browser agent, handle browser-based Single Sign-On. Both the SP and IDP have several bindings that each can be used in the Web Browser Single Sign-On profile, allowing many possible flows. Additionally, this profile supports message flows initiated from either the IDP or SP. This profile also supports the IDP pushing the SAML assertion to the SP, or the SP pulling the assertion from the IDP. Flows initiated from either the SP or IDP are explained at a high level in the JBoss EAP Security Architecture guide . SAML assertions pushed from the IDP utilize HTTP POST messages or HTTP redirects. SAML assertions that are pulled by SPs involve sending an artifact to the receiver, which is then dereferenced to obtain the assertions. The basic flow of the Web Browser Single Sign-On profile is as follows: Principal HTTP request to SP. The principal first attempts to access a secured resource at the SP using an HTTP User Agent, for example a browser. If the principal has already been issued a SAML token with a valid security context, the SP will allow or decline the principal. This is the last step. Otherwise, the SP will attempt to locate the IDP for the authentication request. SP determines IDP. The SP locates the IDP and its endpoint that supports the SP's preferred binding. This allows the SP to send an authentication request to the IDP. The specific means of this process can vary between implementations. Authentication Request issued from SP to IDP using the principal. Once the SP determines the IDP location and endpoint, the SP issues an Authentication Request in the form of an <AuthnRequest> message, which will be delivered by the user agent, principal to the IDP. The HTTP Redirect, HTTP POST , or HTTP Artifact SAML bindings can be used to transfer the message to the IDP using the user agent. IDP identifies principal. Once the Authentication Request is delivered to the IDP by the principal, the principal will be identified by the IDP. The identification method is not specifically defined by the Web Browser Single Sign-On profile and may be accomplished in a number of ways, for example authentication using FORM , using existing session information, kerberos authentication, etc. IDP issues Response to SP. 
Once the principal is identified, the IDP issues a Response in the form of a <Response> message, to be delivered back to the SP for granting or declining access by the principal using the user agent. This message will contain at least one authentication assertion and can also be used to indicate errors. HTTP POST or HTTP Artifacts can be used to transfer this message, but HTTP Redirect cannot be used due to URL length constraints with most user agents. If the user agent initiated an IDP-based flow, for example by attempting to access the IDP directly instead of an SP, the process would begin at this step. If successful, the HTTP POST or HTTP Artifact will be sent to a location, which is pre-configured in the IDP. SP allows or declines access to principal. Once the SP receives the Response, it may grant access for the requested resource to the principal by creating a security context, or it may deny access, or do its own error handling. Note JBoss EAP does not support the SAML artifact binding. HTTP Redirect vs. POST Bindings HTTP Redirect bindings make use of HTTP GET requests and the URL query parameters to transmit protocol messages. Messages sent in this manner are also URL and Base-64 encoded before being sent and decoded by the receiver. HTTP POST bindings send messages using form data, and also do a base-64 encode/decode on the message. Both SPs and IDPs can transmit and receive messages using redirect or POST bindings. Due to the limitation of URL lengths in certain scenarios, HTTP Redirect is usually used when passing short messages, and HTTP POST is used when passing longer messages. 1.2.2. Global Logout Profile The Global Logout Profile allows a principal, who has authenticated with a set of IDPs and SPs, to log out and have that assertion be propagated to one or more associated IDPs and SPs. When a principal authenticates with an IDP, the principal and IDP have established an authentication session. The IDP may then issue assertions to various SPs, or relying parties, based on that authentication. From there if the principal attempts to access a secured resource within those SPs, the SPs may choose to establish additional sessions with the principal based on that assertion issued from the IDP, hence relying on the IDP. Once a session or set of sessions is created, a principal might be logged out of sessions individually using various means, or they can use the Global Logout Profile to logout of all sessions and from all SPs and IDPs at once. The Global Logout Profile can use the HTTP Redirect, HTTP POST or HTTP Artifact bindings in its flow. It can also use SOAP binding in certain cases which are not in the scope of this document. Note Single Logout Profile can be used as a synonym to Global Logout Profile. Note JBoss EAP does not support the SAML artifact binding. As with the Web Browser Single Sign-On profile flow, the Global Logout Profile flow may be initiated either at the IDP or the SP. The basic flow of the Global Logout Profile is as follows: Logout issued to IDP by Session Participant. A session participant, such as Service Providers or other relying parties, terminates its own session with the principal and sends a Logout Request, in the form of a <LogoutRequest> message, to the IDP that initially issued the security assertion for the principal. This request can be sent directly between the IDP and relying party, or indirectly by using the principal's user agent as a pass through. IDP identifies Session Participant. 
Once the IDP receives the Logout Request, it uses that request to determine what sessions to terminate with which relying parties, including any sessions the IDP owns as a session authority or session participant. For each session, the IDP issues a Logout Request to the relying party and waits for a Logout Response from each party before issuing a new Logout Response back to the original session participant. In cases where the Global Logout Profile flow was initiated at the IDP, the flow begins at this step, and some other mechanism is used to determine the sessions and SPs. Logout issued by IDP. Once the IDP determines all of the sessions and associated relying parties, it sends a Logout Request, in the form of a <LogoutRequest> message, to each relying party and awaits a Logout Response. These requests may be sent directly between the IDP and the relying parties, or indirectly through the principal's user agent. Logout response issued by Session Participant or Authority. Each relying party, including the IDP itself in some cases, attempts to terminate the session as directed by the IDP in the Logout Request, and returns a Logout Response in the form of a <LogoutResponse> message, back to the IDP. As with the Logout Request, the response can be issued directly between the relying party and the IDP, or indirectly through the principal's user agent. IDP issues Logout response to original Session Participant. Once all the Logout Responses has been received from the relying parties, the IDP sends a new Logout Response, in the form of a <LogoutResponse> message, back to original session participant who requested the logout. As with the other parts of this flow, this response can be passed directly between the IDP and the session participant, or indirectly through the principal's user agent. In cases where the Logout Request was initiated at the IDP, this step is omitted. Note The direct communication between the IDP and SP portion of the Global Logout Profile is not supported in JBoss EAP. 1.2.3. Multiple IDPs and the Identity Discovery Profile Browser-based Single Sign-On using SAML v2 also supports having multiple IDPs, and can be used in both the Web Browser Single Sign-On profile as well as the Global Logout profile. In cases where multiple IDPs are configured, the Identity Discovery SAML profile is used to determine which IDP a principal uses. This is accomplished by reading and writing cookies with domain information and a list of IDPs. 1.3. Further Reading For full details on the SAML v2, see the official SAML 2.0 specification .
[ "This user logged in as Sarah at 9:30 using a username and password. Sarah is a member of the Managers group. Sarah is accepted to access the Employee Information resource." ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_saml_v2/single_sign_on_with_saml_v2_deeper_dive
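As a small illustration of the HTTP POST binding described above, the SAML protocol message is Base64-encoded before it is placed in the SAMLRequest form field. The request below is a hypothetical, heavily abbreviated <samlp:AuthnRequest>, not one generated by JBoss EAP, and the URLs are placeholders.

# A hypothetical, abbreviated authentication request.
cat > authn-request.xml << 'EOF'
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="_example1234" Version="2.0" IssueInstant="2024-01-01T09:30:00Z"
    AssertionConsumerServiceURL="https://sp.example.com/saml/acs"/>
EOF

# Base64-encode the message, as the SP does before posting it to the IDP.
base64 -w0 authn-request.xml > samlrequest.b64

# Decode it again, as the IDP does before processing the request.
base64 -d samlrequest.b64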
Introduction
Introduction 1. About This Guide This book describes how to use Global Network Block Device (GNBD) with Global File System (GFS), including information about device-mapper multipath, GNBD driver and command usage, and running GFS on a GNBD server node.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/ch_introduction-gnbd
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights_for_openshift/1-latest/html/assessing_security_vulnerabilities_in_your_openshift_cluster_using_red_hat_insights/proc-providing-feedback-on-redhat-documentation
20.3. Configuring Network Encryption for an existing Trusted Storage Pool
20.3. Configuring Network Encryption for an existing Trusted Storage Pool Follow this section to configure I/O and management encryption for an existing Red Hat Gluster Storage Trusted Storage Pool. 20.3.1. Enabling I/O Encryption Follow this section to enable I/O encryption between servers and clients. Procedure 20.6. Enabling I/O encryption Unmount the volume from all clients Unmount the volume by running the following command on all clients. Stop the volume Stop the volume by running the following command from any server. Specify servers and clients to allow Provide a list of the common names of servers and clients that are allowed to access the volume. The common names provided must be exactly the same as the common name specified when you created the glusterfs.pem file for that server or client. This provides an additional check in case you want to leave keys in place, but temporarily restrict a client or server by removing it from this list, as shown in Section 20.7, "Deauthorizing a Client" . You can also use the default value of * , which indicates that any TLS authenticated machine can mount and access the volume. Enable TLS/SSL encryption on the volume Run the following command from any server to enable TLS/SSL encryption. Start the volume Verify Verify that the volume can be mounted only on authorized clients. The process for mounting a volume depends on the protocol your client is using. The following command mounts a volume using the native FUSE protocol. Ensure that this command works on authorized clients, and does not work on unauthorized clients.
[ "umount mountpoint", "gluster volume stop VOLNAME", "gluster volume set volname auth.ssl-allow ' server1 , server2 , client1 , client2 , client3 '", "gluster volume set volname client.ssl on gluster volume set volname server.ssl on", "gluster volume start volname", "mount -t glusterfs server1:/testvolume /mnt/glusterfs" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch20s03
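A condensed sketch of the whole procedure, assuming a volume named testvolume mounted at /mnt/glusterfs and common names server1, server2, and client1; replace these with the common names used in your own glusterfs.pem files.

# Unmount on every client, then stop the volume from any server.
umount /mnt/glusterfs
gluster volume stop testvolume

# Allow only the listed common names, or use '*' to allow any TLS-authenticated machine.
gluster volume set testvolume auth.ssl-allow 'server1,server2,client1'

# Enable TLS/SSL for both client I/O and server-side access.
gluster volume set testvolume client.ssl on
gluster volume set testvolume server.ssl on

# Start the volume again and verify the mount from an authorized client.
gluster volume start testvolume
mount -t glusterfs server1:/testvolume /mnt/glusterfs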
Client Configuration Guide for Red Hat Insights
Client Configuration Guide for Red Hat Insights Red Hat Insights 1-latest Configuration options and use cases for the Insights client Red Hat Customer Content Services
[ "yum install insights-client", "insights-client --register", "yum install insights-client", "dnf install insights-client", "yum install insights-client", "yum install insights-client", "dnf install insights-client", "subscription-manager register --activationkey=_activation_key_name_ --org=_organization_ID_", "rhc connect --activation-key example_key --organization your_org_ID", "subscription-manager register subscription-manager status", "insights-client --register insights-client --status", "insights-client --register --display-name ITC-4", "Successfully registered host insights01-rhel9 as ITC-4 in group None Automatic scheduling for Insights has been enabled. Starting to collect Insights data for ITC-4 Writing RHSM facts to /etc/rhsm/facts/insights-client.facts Uploading Insights data. Successfully uploaded report from ITC-4 to account 1234567. View the Red Hat Insights console at https://console.redhat.com/insights/", "insights-client --display-name ITC-5", "System display name changed from ITC-4 to ITC-5", "insights-client --unregister Successfully unregistered from the Red Hat Insights Service", "insights-client --status System is NOT registered locally via .registered file. Unregistered at 2021-03-12T10:36:39.257300 Insights API says this machine was unregistered at 2021-03-12T00:36:39.000Z", "insights-client --unregister", "insights-client --register", "Successfully uploaded report for <machine name> View the Red Hat Insights console at https://console.redhat.com/insights/", "insights-client --display-name ITC-4 System display name changed from None to ITC-4", "insights-client --display-name \"ITC-4 B9 4th floor\" System display name changed from None to ITC-4 B9 4th floor", "insights-client --version Client: 3.0.6-0 Core: 3.0.121-1", "Obfuscate IP addresses #obfuscate=False", "obfuscate=True", "192.168.0.24", "10.230.230.1", "#obfuscate_hostname=False", "obfuscate_hostname=True", "display_name=example-display-name", "insights-client --display-name ITC-4", "RTP.data.center.01", "90f4a9365ce0.example.com", "Location of the redaction file for commands, files, and components #redaction_file=/etc/insights-client/file-redaction.yaml Location of the redaction file for patterns and keywords #content_redaction_file=/etc/insights-client/file-content-redaction.yaml", "files: commands:", "files: - /etc/audit/auditd.conf", "commands: - ethtool_i", "ll file-redaction.yaml -rw-------. 1 root root 145 Sep 25 17:39 file-redaction.yaml", "file-redaction.yaml --- Redact the entire output of commands Specify commands by either full command or by the \"symbolic_name\" like \"ethtool_i.\" Refer to the \"Datasource Catalog\" and \"General Datasources\" at https://insights-core.readthedocs.io/en/latest/specs_catalog.html#general-datasource for a full list of available symbolic_names, and the commands and files they correspond to. commands: - /bin/rpm -qa - /bin/ls - ethtool_i Redact the entire output of files Specify files either by full filename or by the \"symbolic_name\" for example, \"cluster_conf.\" Refer to the \"Datasource Catalog\" and \"General Datasources\" at https://insights-core.readthedocs.io/en/latest/specs_catalog.html#general-datasource for a full list of available symbolic_names, and the commands and files they correspond to. 
files: - /etc/audit/auditd.conf - cluster_conf", "insights-client --no-upload", "WARNING: Excluding data from files Starting to collect Insights data for I-HOST WARNING: Skipping command /bin/dmesg WARNING: Skipping file /etc/cluster/cluster.conf Archive saved at /var/tmp/qsINM9/insights-ITC-4-20190925180232.tar.gz", "file-content-redaction.yaml --- Pattern redaction per matching line Lines that match a pattern are excluded from files and command output. Patterns are processed in the order that they are listed. Example patterns: - \"a_string_1\" - \"a_string_2\" Regular expression pattern redaction per line Use \"regex:\" to wrap patterns with regular expressions\" Example patterns: regex: - \"abc.*def\" - \"localhost[[:digit:]]\" Keyword replacement redaction Replace keywords in files and command output with generic identifiers Keyword does not support regex Example keywords: - \"1.1.1.1\" - \"My Name\" - \"a_name\"", "ll file-content-redaction.yaml -rw-------. 1 root root 145 Sep 25 17:39 file-content-redaction.yaml", "insights-client --no-upload", "WARNING: Excluding data from files Starting to collect Insights data for ITC-4 WARNING: Skipping patterns found in remove.conf WARNING: Skipping command /bin/dmesg WARNING: Skipping command /bin/hostname WARNING: Skipping file /etc/cluster/cluster.conf WARNING: Skipping file /etc/hosts Archive saved at /var/tmp/qsINM9/insights-ITC-4-20190925180232.tar.gz", "cd /var/tmp/qsINM9/", "tar -xzf insights-ITC-4-20190925180232.tar.gz", "insights-client --keep-archive", "Starting to collect Insights data for ITC-4 Uploading Insights data. Successfully uploaded report from ITC-4 to account 6229994. Insights archive retained in /var/tmp/ozM8bY/insights-ITC-4-20190925181622.tar.gz", "cd /var/tmp/ozM8bY/", "tar -xzf insights-ITC-4-20190925181622.tar.gz", "insights-client --group=<name-you-choose>", "vim /etc/insights-client/tags.yaml", "tags --- group: _group-name-value_ location: _location-name-value_ description: - RHEL8 - SAP key 4: value", "insights-client", "vi /etc/insights-client/tags.yaml", "cat /etc/insights-client/tags.yaml group: redhat location: Brisbane/Australia description: - RHEL8 - SAP security: strict network_performance: latency", "insights-client", "insights-client --version Client: 3.0.6-0 Core: 3.0.121-1", "insights-client --disable-schedule", "insights-client --version Client: 3.0.6-0 Core: 3.0.121-1", "insights-client --enable-schedule", "systemctl edit insights-client.timer", "[Timer] OnCalendar=daily RandomizedDelaySec=14400", "insights-client --enable-schedule", "#auto_update=True", "auto_update=False", "auto_update=False", "auto_update=True", "insights-client --support", "Collecting logs Insights version: insights-core-3.0.121-1 Registration check: status: True unreachable: False . . . . Copying Insights logs to archive Support information collected in /var/tmp/H_Y43a/insights-client-logs-20190927144011.tar.gz", "cd /var/tmp/H_Y43a", "tar -xzf insights-client-logs-20190927144011.tar.gz" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html-single/client_configuration_guide_for_red_hat_insights/index
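One concrete sketch of the redaction workflow shown in the command listing above; the redacted command and file are only examples, and the file path is the default location expected by the client.

# Create a command/file redaction file that is readable only by root.
cat > /etc/insights-client/file-redaction.yaml << 'EOF'
commands:
  - ethtool_i
files:
  - /etc/audit/auditd.conf
EOF
chmod 0600 /etc/insights-client/file-redaction.yaml

# Collect an archive locally, without uploading, to confirm the items are skipped.
insights-client --no-upload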
7.53. fence-agents
7.53. fence-agents 7.53.1. RHBA-2015:1350 - fence-agents bug fix and enhancement update Updated fence-agents packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The fence-agents packages provide a collection of scripts for handling remote power management for cluster devices. They allow failed or unreachable nodes to be forcibly restarted and removed from the cluster. Note The fence-agents packages have been upgraded to upstream version 4.0.15, which provides a number of bug fixes and enhancements over the previous version. Bug Fixes and Enhancements BZ# 1049805 , BZ# 1094515 , BZ# 1099551 , BZ# 1111482 , BZ# 1118008 , BZ# 1123897 , BZ# 1171734 This update adds the "--tls1.0" option to the fence agent for HP Integrated Lights-Out 2 (iLO2) devices. With this option, iLO2 negotiation of the TLS protocol works as expected when using an iLO2 device with firmware version 2.27. The fence_kdump agent now supports the "monitor" action, making integration with a cluster stack easier. The fence-agents packages now support the fence_ilo_moonshot fence agent for HP Moonshot iLO devices. For information on the fence_ilo_moonshot parameters, see the fence_ilo_moonshot(8) man page. This update adds support for the fence_ilo_ssh fence agent. The agent logs into an iLO device using SSH and reboots a specified outlet. For information on the fence_ilo_ssh parameters, see the fence_ilo_ssh(8) man page. This update adds support for the fence_mpath fence agent. This agent is an I/O fencing agent that uses SCSI-3 persistent reservations to control access to multipath devices. For information on fence_mpath and its parameters, see the fence_mpath(8) man page. The fence agent for APC devices over Simple Network Management Protocol (SNMP) has been updated to support the latest versions of the APC firmware. This update adds support for the fence_emerson fencing agent for Emerson devices over Simple Network Management Protocol (SNMP). It is an I/O fencing agent that can be used with the MPX and MPH2 Emerson devices. For information on the parameters for the fence_emerson fencing agent, see the fence_emerson(8) man page. Users of fence-agents are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-fence-agents
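To review what the newly added agents accept before wiring them into a cluster, the man pages referenced above are the primary source; the metadata action, part of the standard fencing agent interface, prints the same option list as XML. Treat the exact invocation below as a sketch and confirm it against the man page shipped with your version.

# Read the documented parameters for the new agents.
man fence_mpath
man fence_ilo_ssh

# Print the agent's supported options as XML metadata.
fence_ilo_ssh -o metadata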
Chapter 19. Maven Indexer Plugin
Chapter 19. Maven Indexer Plugin The Maven Indexer Plugin is required for the Maven plugin to enable it to quickly search Maven Central for artifacts. To deploy the Maven Indexer plugin, use the following commands: Prerequisites Before deploying the Maven Indexer Plugin, make sure that you have followed the instructions in the Preparing to Use Maven section of Installing on Apache Karaf . Deploy the Maven Indexer Plugin Go to the Karaf console and enter the following command to install the Maven Indexer plugin: Enter the following commands to configure the Maven Indexer plugin: Wait for the Maven Indexer plugin to be deployed. This may take a few minutes. Look out for messages like those shown below to appear on the log tab. When the Maven Indexer plugin has been deployed, use the following commands to add further external Maven repositories to the Maven Indexer plugin configuration:
[ "features:install hawtio-maven-indexer", "config:edit io.hawt.maven.indexer config:proplist config:propset repositories 'https://maven.oracle.com' config:proplist config:update", "config:edit io.hawt.maven.indexer config:proplist config:propset repositories external repository config:proplist config:update" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/fesbfabricmavenindexer
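For example, to point the indexer at an additional repository from the Karaf console, repeat the configuration sequence with your own repository URL; the URL below is a placeholder.

config:edit io.hawt.maven.indexer
config:propset repositories 'https://repo.example.com/maven2'
config:proplist
config:update

The config:proplist step is optional; it simply confirms the repositories property before config:update applies the change.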
Chapter 3. Preparing Storage for Red Hat Virtualization
Chapter 3. Preparing Storage for Red Hat Virtualization You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Prerequisites Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Manager virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation. Warning Extending or otherwise changing the self-hosted engine storage domain after deployment of the self-hosted engine is not supported. Any such change might prevent the self-hosted engine from booting. When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine. If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target. It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine. 3.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. 
# vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which export are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable. 3.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } 3.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. 
Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 3.4. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 3.5. Customizing Multipath Configurations for SAN Vendors If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf . To override the multipath settings, do not customize /etc/multipath.conf . Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override. VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf . To avoid causing severe storage failures, follow these guidelines: Do not modify /etc/multipath.conf . If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems. Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf . Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf . Warning Not following these guidelines can cause catastrophic storage errors. Prerequisites VDSM is configured to use the multipath module. To verify this, enter: Procedure Create a new configuration file in the /etc/multipath/conf.d directory. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf . Remove any comment marks, edit the setting values, and save your changes. Apply the new configuration settings by entering: Note Do not restart the multipathd service. Doing so generates errors in the VDSM logs. Verification steps Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections. Enable one connection at a time and verify that doing so makes the storage domain reachable. Additional resources Recommended Settings for Multipath.conf Red Hat Enterprise Linux DM Multipath Configuring iSCSI Multipathing How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? 3.6. 
Recommended Settings for Multipath.conf Do not override the following settings: user_friendly_names no Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID} . The default value of this setting, no , prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior. Warning Do not change this setting to user_friendly_names yes . User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported. find_multipaths no This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no , allows RHV to access devices through multipath even if only one path is available. Warning Do not override this setting. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
[ "dnf install nfs-utils -y", "cat /proc/fs/nfsd/versions", "systemctl enable nfs-server systemctl enable rpcbind", "groupadd kvm -g 36", "useradd vdsm -u 36 -g kvm", "mkdir /storage chmod 0755 /storage chown 36:36 /storage/", "vi /etc/exports cat /etc/exports /storage *(rw)", "systemctl restart rpcbind systemctl restart nfs-server", "exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }", "vdsm-tool is-configured --module multipath", "systemctl reload multipathd" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Preparing_Storage_for_RHV_SHE_cli_deploy
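A minimal sketch of the drop-in approach from the multipath customization section; the vendor string, product string, and no_path_retry value are placeholders that must come from your storage vendor, and per the guidelines above, no_path_retry should only be overridden when the vendor requires it.

# Confirm that VDSM manages the multipath configuration before adding overrides.
vdsm-tool is-configured --module multipath

# Put vendor-required overrides in a drop-in file; never edit /etc/multipath.conf itself.
cat > /etc/multipath/conf.d/90-example-array.conf << 'EOF'
devices {
    device {
        vendor  "EXAMPLE"
        product "ARRAY"
        no_path_retry 12
    }
}
EOF

# Apply the new settings; do not restart multipathd.
systemctl reload multipathd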
2.7. RHEA-2011:0579 - new package: hwloc
2.7. RHEA-2011:0579 - new package: hwloc A new hwloc package is now available for Red Hat Enterprise Linux 6. The hwloc package provides Portable Hardware Locality, which is a portable abstraction of the hierarchical topology of current architectures. This enhancement update adds the hwloc package to Red Hat Enterprise Linux 6. (BZ# 648593 ) All users who require hwloc are advised to install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/hwloc_new
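Once the package is installed, the detected topology can be inspected directly; depending on the package layout, the viewer may be installed as lstopo or lstopo-no-graphics, and its man page lists the available output formats.

# Install the package and print the detected hardware topology.
yum install hwloc
lstopo

# See the supported output formats and options.
man lstopo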
13.2.2. Preparing a Driver Disc
13.2.2. Preparing a Driver Disc You can create a driver update disc on CD or DVD. 13.2.2.1. Creating a driver update disc on CD or DVD Important CD/DVD Creator is part of the GNOME desktop. If you use a different Linux desktop, or a different operating system altogether, you will need to use another piece of software to create the CD or DVD. The steps will be generally similar. Make sure that the software that you choose can create CDs or DVDs from image files. While this is true of most CD and DVD burning software, exceptions exist. Look for a button or menu entry labeled burn from image or similar. If your software lacks this feature, or you do not select it, the resulting disc will hold only the image file itself, instead of the contents of the image file. Use the desktop file manager to locate the ISO image file of the driver disc, supplied to you by Red Hat or your hardware vendor. Figure 13.2. A typical .iso file displayed in a file manager window Right-click on this file and choose Write to disc . You will see a window similar to the following: Figure 13.3. CD/DVD Creator's Write to Disc dialog Click the Write button. If a blank disc is not already in the drive, CD/DVD Creator will prompt you to insert one. After you burn a driver update disc CD or DVD, verify that the disc was created successfully by inserting it into your system and browsing to it using the file manager. You should see a single file named rhdd3 and a directory named rpms : Figure 13.4. Contents of a typical driver update disc on CD or DVD If you see only a single file ending in .iso , then you have not created the disc correctly and should try again. Ensure that you choose an option similar to burn from image if you use a Linux desktop other than GNOME or if you use a different operating system. Refer to Section 13.3.2, "Let the Installer Prompt You for a Driver Update" and Section 13.3.3, "Use a Boot Option to Specify a Driver Update Disk" to learn how to use the driver update disc during installation.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Preparing_a_driver_update_disk-ppc
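If you are not using a graphical desktop, a command-line sketch like the following can serve the same purpose; it assumes the optical drive is /dev/sr0, the image is named driver_update.iso, and that growisofs (from the dvd+rw-tools package) is available.

# Burn the ISO image to a blank disc.
growisofs -dvd-compat -Z /dev/sr0=driver_update.iso

# Mount the finished disc and confirm it contains the rhdd3 file and the rpms directory.
mount /dev/sr0 /mnt
ls /mnt
umount /mnt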
Chapter 10. availability
Chapter 10. availability This chapter describes the commands under the availability command. 10.1. availability zone list List availability zones and their status Usage: Table 10.1. Optional Arguments Value Summary -h, --help Show this help message and exit --compute List compute availability zones --network List network availability zones --volume List volume availability zones --long List additional fields in output Table 10.2. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 10.3. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 10.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack availability zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--compute] [--network] [--volume] [--long]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/availability
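For example, combining the options documented above to list only compute availability zones with the extra fields as JSON, and to narrow and sort the table output to a single column:

# Compute zones only, with additional fields, rendered as JSON.
openstack availability zone list --compute --long -f json

# Show just the zone names, sorted.
openstack availability zone list -c "Zone Name" --sort-column "Zone Name"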
Chapter 5. Creating Windows machine sets
Chapter 5. Creating Windows machine sets 5.1. Creating a Windows machine set on AWS You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image with the Docker-formatted container runtime add-on enabled. Use the following aws command to query valid AMI images: USD aws ec2 describe-images --region <aws region name> --filters "Name=name,Values=Windows_Server-2019*English*Full*Containers*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table Important Currently, the Docker-formatted container runtime is used in Windows nodes. Kubernetes is deprecating Docker as a container runtime; you can reference the Kubernetes documentation for more information in Docker deprecation . Containerd will be the new supported container runtime for Windows nodes in a future release of Kubernetes. 5.1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.10 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.10 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the machine set to meet your compute need. Warning Control plane machines cannot be managed by machine sets. The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the machine set API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the scaling policy so that you can scale up nodes but not scale them down. 
Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program sends out machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 5.1.2. Sample YAML for a Windows MachineSet object on AWS This sample YAML defines a Windows MachineSet object running on Amazon Web Services (AWS) that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api 1 3 5 10 13 14 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the infrastructure ID, worker label, and zone. 7 Configure the machine set as a Windows machine. 8 Configure the Windows node as a compute machine. 9 Specify the AMI ID of a Windows image with a container runtime installed. You must use Windows Server 2019. 11 Specify the AWS zone, like us-east-1a . 12 Specify the AWS region, like us-east-1 . 16 Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent machine sets to consume. 5.1.3. 
Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 5.1.4. Additional resources Overview of machine management 5.2. Creating a Windows machine set on vSphere You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. 
For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a supported Windows Server as the operating system image with the Docker-formatted container runtime add-on enabled. Important Currently, the Docker-formatted container runtime is used in Windows nodes. Kubernetes is deprecating Docker as a container runtime; you can reference the Kubernetes documentation for more information on Docker deprecation . Containerd will be the new supported container runtime for Windows nodes in a future release of Kubernetes. 5.2.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.10 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.10 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the machine set to meet your compute need. Warning Control plane machines cannot be managed by machine sets. The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the machine set API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the scaling policy so that you can scale up nodes but not scale them down. Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program sends out machine sets across availability zones on your behalf. 
And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 5.2.2. Preparing your vSphere environment for Windows container workloads You must prepare your vSphere environment for Windows container workloads by creating the vSphere Windows VM golden image and enabling communication with the internal API server for the WMCO. 5.2.2.1. Creating the vSphere Windows VM golden image Create a vSphere Windows virtual machine (VM) golden image. Prerequisites You have created a private/public key pair, which is used to configure key-based authentication in the OpenSSH server. The private key must also be configured in the Windows Machine Config Operator (WMCO) namespace. This is required to allow the WMCO to communicate with the Windows VM. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. Note You must use Microsoft PowerShell commands in several cases when creating your Windows VM. PowerShell commands in this guide are distinguished by the PS C:\> prefix. Procedure Create a new VM in the vSphere client using the Windows Server 2022 image that includes the Microsoft patch KB5012637 . Important The virtual hardware version for your VM must meet the infrastructure requirements for OpenShift Container Platform. For more information, see the "VMware vSphere infrastructure requirements" section in the OpenShift Container Platform documentation. Also, you can refer to VMware's documentation on virtual machine hardware versions . Install and configure VMware Tools version 11.0.6 or greater on the Windows VM. See the VMware Tools documentation for more information. After installing VMware Tools on the Windows VM, verify the following: The C:\ProgramData\VMware\VMware Tools\tools.conf file exists with the following entry: exclude-nics= If the tools.conf file does not exist, create it with the exclude-nics option uncommented and set as an empty value. This entry ensures the cloned vNIC generated on the Windows VM by the hybrid-overlay is not ignored. The Windows VM has a valid IP address in vCenter: C:\> ipconfig The VMTools Windows service is running: PS C:\> Get-Service -Name VMTools | Select Status, StartType Install and configure the OpenSSH Server on the Windows VM. See Microsoft's documentation on installing OpenSSH for more details. Set up SSH access for an administrative user. See Microsoft's documentation on the Administrative user to do this. Important The public key used in the instructions must correspond to the private key you create later in the WMCO namespace that holds your secret. See the "Configuring a secret for the Windows Machine Config Operator" section for more details. Install the docker container runtime on your Windows VM following the Microsoft documentation . You must create a new firewall rule in the Windows VM that allows incoming connections for container logs. Run the following PowerShell command to create the firewall rule on TCP port 10250: PS C:\> New-NetFirewallRule -DisplayName "ContainerLogsPort" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow Clone the Windows VM so it is a reusable image. Follow the VMware documentation on how to clone an existing virtual machine for more details. 
In the cloned Windows VM, run the Windows Sysprep tool : C:\> C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1 1 Specify the path to your unattend.xml file. Note There is a limit on how many times you can run the sysprep command on a Windows image. Consult Microsoft's documentation for more information. An example unattend.xml is provided, which maintains all the changes needed for the WMCO. You must modify this example; it cannot be used directly. Example 5.1. Example unattend.xml <?xml version="1.0" encoding="UTF-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="specialize"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Security-SPP-UX" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-SQMApi" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass="oobeSystem"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend> 1 Specify the ComputerName , which must follow the Kubernetes' names specification . These specifications also apply to Guest OS customization performed on the resulting template while creating new VMs. 
2 Disable the automatic logon to avoid the security issue of leaving an open terminal with Administrator privileges at boot. This is the default value and must not be changed. 3 Replace the MyPassword placeholder with the password for the Administrator account. This prevents the built-in Administrator account from having a blank password by default. Follow Microsoft's best practices for choosing a password . After the Sysprep tool has completed, the Windows VM will power off. You must not use or power on this VM anymore. Convert the Windows VM to a template in vCenter . 5.2.2.1.1. Additional resources Configuring a secret for the Windows Machine Config Operator VMware vSphere infrastructure requirements 5.2.2.2. Enabling communication with the internal API server for the WMCO on vSphere The Windows Machine Config Operator (WMCO) downloads the Ignition config files from the internal API server endpoint. You must enable communication with the internal API server so that your Windows virtual machine (VM) can download the Ignition config files, and the kubelet on the configured VM can only communicate with the internal API server. Prerequisites You have installed a cluster on vSphere. Procedure Add a new DNS entry for api-int.<cluster_name>.<base_domain> that points to the external API server URL api.<cluster_name>.<base_domain> . This can be a CNAME or an additional A record. Note The external API endpoint was already created as part of the initial cluster installation on vSphere. 5.2.3. Sample YAML for a Windows MachineSet object on vSphere This sample YAML defines a Windows MachineSet object running on VMware vSphere that the Windows Machine Config Operator (WMCO) can react upon. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: "" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 6 Specify the Windows machine set name. The machine set name cannot be more than 9 characters long, due to the way machine names are generated in vSphere. 7 Configure the machine set as a Windows machine. 
8 Configure the Windows node as a compute machine. 9 Specify the size of the vSphere Virtual Machine Disk (VMDK). Note This parameter does not set the size of the Windows partition. You can resize the Windows partition by using the unattend.xml file or by creating the vSphere Windows virtual machine (VM) golden image with the required disk size. 10 Specify the vSphere VM network to deploy the machine set to. This VM network must be where other Linux compute machines reside in the cluster. 11 Specify the full path of the Windows vSphere VM template to use, such as golden-images/windows-server-template . The name must be unique. Important Do not specify the original VM template. The VM template must remain off and must be cloned for new Windows machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. 12 The windows-user-data is created by the WMCO when the first Windows machine is configured. After that, the windows-user-data is available for all subsequent machine sets to consume. 13 Specify the vCenter Datacenter to deploy the machine set on. 14 Specify the vCenter Datastore to deploy the machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Optional: Specify the vSphere resource pool for your Windows VMs. 17 Specify the vCenter server IP or fully qualified domain name. 5.2.4. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. 
Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 5.2.5. Additional resources Overview of machine management
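Both machine set sections above note that you scale a machine set by changing its replicas field. A minimal sketch of doing that from the CLI, assuming a Windows machine set named <infrastructure_id>-windows-worker-<zone> already exists (the name is a placeholder):

# Scale the Windows machine set to two replicas
$ oc scale machineset <infrastructure_id>-windows-worker-<zone> \
    -n openshift-machine-api --replicas=2

# Watch the corresponding machines being provisioned
$ oc get machines -n openshift-machine-api -w

Editing the replicas field directly with oc edit machineset achieves the same result.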
[ "aws ec2 describe-images --region <aws region name> --filters \"Name=name,Values=Windows_Server-2019*English*Full*Containers*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "exclude-nics=", "C:\\> ipconfig", "PS C:\\> Get-Service -Name VMTools | Select Status, StartType", "PS C:\\> New-NetFirewallRule 
-DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow", "C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 
machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/windows_container_support_for_openshift/creating-windows-machine-sets
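As a follow-up to either procedure in this chapter, you may want to confirm that the machines created by a Windows machine set joined the cluster as nodes. A minimal sketch, assuming the standard kubernetes.io/os=windows node label and the placeholder machine set name used earlier:

# List only the Windows nodes in the cluster
$ oc get nodes -l kubernetes.io/os=windows

# Inspect the machines created by a specific Windows machine set
$ oc get machines -n openshift-machine-api \
    -l machine.openshift.io/cluster-api-machineset=<infrastructure_id>-windows-worker-<zone>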
Chapter 5. Multi-Credential Assignment
Chapter 5. Multi-Credential Assignment Automation controller provides support for assigning zero or more credentials to a job template. 5.1. Background Before automation controller v3.3, job templates had the following requirements with respect to credentials: All job templates (and jobs) were required to have exactly one Machine/SSH or Vault credential (or one of both). All job templates (and jobs) could have zero or more "extra" credentials. Extra credentials represented "Cloud" and "Network" credentials that could be used to provide authentication to external services through environment variables, for example, AWS_ACCESS_KEY_ID . This model required a variety of disjoint interfaces for specifying credentials on a job template and it lacked the ability to associate multiple Vault credentials with a playbook run, a use case supported by Ansible core from Ansible 2.4 onwards. This model also poses a stumbling block for certain playbook execution workflows, such as having to attach a "dummy" Machine/SSH credential to the job template to satisfy the requirement. 5.2. Important Changes All automation controller 4.4 Job templates have a single interface for credential assignment. From the API endpoint: GET /api/v2/job_templates/N/credentials/ You can associate and disassociate credentials using POST requests, similar to the behavior in the deprecated extra_credentials endpoint: POST /api/v2/job_templates/N/credentials/ {'associate': true, 'id': 'X'} POST /api/v2/job_templates/N/credentials/ {'disassociate': true, 'id': 'Y'} With this model, a job template is considered valid even when there are no credentials assigned to it. This model also provides users the ability to assign multiple Vault credentials to a job template. 5.3. Launch Time Considerations Before automation controller v3.3, job templates used a configurable attribute, ask_credential_on_launch . This value was used at launch time to determine which missing credential values were necessary for launch. This was a way to specify a Machine or SSH credential to satisfy the minimum credential requirement. Under the unified credential list model, this attribute still exists, but it no longer requires a credential. Now when ask_credential_on_launch is true , it signifies that you can specify a list of credentials at launch time to override those defined on the job template. For example: POST /api/v2/job_templates/N/launch/ {'credentials': [A, B, C]}` If ask_credential_on_launch is false , it signifies that custom credentials provided in the POST /api/v2/job_templates/N/launch/ are ignored. Under this model, the only purpose for ask_credential_on_launch is to signal API clients to prompt the user for (optional) changes at launch time. 5.4. Multi-Vault Credentials Because you can assign multiple credentials to a job, you can specify multiple Vault credentials to decrypt when your job template runs. This functionality mirrors the support for Managing vault passwords . Vault credentials now have an optional field, vault_id , which is similar to the --vault-id argument of ansible-playbook . Use the following procedure to run a playbook which makes use of multiple vault passwords: Procedure Create a Vault credential in automation controller for each vault password. Specify the Vault ID as a field on the credential and input the password (which is encrypted and stored). 
Assign multiple vault credentials to the job template using the new credentials endpoint: POST /api/v2/job_templates/N/credentials/ { 'associate': true, 'id': X } Alternatively, you can perform the same assignment in the automation controller UI on the Create Credential page. In that case, the credential you create specifies the secret to be used by its Vault Identifier ("first") and password pair. When that credential is used in a Job Template, it only decrypts the secret associated with the "first" Vault ID. If you have a playbook that is set up the traditional way, with all the secrets in one big file without distinction, then leave the Vault Identifier field blank when setting up the Vault credential. 5.4.1. Prompted Vault Credentials For Vault credential passwords that are marked with Prompt on launch, the launch endpoint of any related Job Templates communicates the necessary Vault passwords using the passwords_needed_to_start parameter: GET /api/v2/job_templates/N/launch/ { 'passwords_needed_to_start': [ 'vault_password.X', 'vault_password.Y', ] } Where X and Y are primary keys of the associated Vault credentials: POST /api/v2/job_templates/N/launch/ { 'credential_passwords': { 'vault_password.X': 'first-vault-password', 'vault_password.Y': 'second-vault-password' } } 5.4.2. Linked credentials Instead of uploading sensitive credential information into automation controller, you can link credential fields to external systems and use them to run your playbooks. For more information, see Secret Management System in the Automation controller User Guide.
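The API calls above show only the request bodies. A minimal sketch of issuing them with curl follows; the controller host name, the OAuth bearer token, the job template ID 42, and the credential ID 7 are all hypothetical values:

# Associate Vault credential 7 with job template 42 (IDs are hypothetical)
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"associate": true, "id": 7}' \
  https://controller.example.com/api/v2/job_templates/42/credentials/

# Launch the job template, supplying a prompted vault password
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"credential_passwords": {"vault_password.7": "first-vault-password"}}' \
  https://controller.example.com/api/v2/job_templates/42/launch/

The -k flag skips certificate verification and is only appropriate for lab setups with self-signed certificates.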
[ "GET /api/v2/job_templates/N/credentials/", "POST /api/v2/job_templates/N/credentials/ {'associate': true, 'id': 'X'} POST /api/v2/job_templates/N/credentials/ {'disassociate': true, 'id': 'Y'}", "POST /api/v2/job_templates/N/launch/ {'credentials': [A, B, C]}`", "POST /api/v2/job_templates/N/credentials/ { 'associate': true, 'id': X }", "GET /api/v2/job_templates/N/launch/ { 'passwords_needed_to_start': [ 'vault_password.X', 'vault_password.Y', ] }", "POST /api/v2/job_templates/N/launch/ { 'credential_passwords': { 'vault_password.X': 'first-vault-password' 'vault_password.Y': 'second-vault-password' } }" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-multi-credential-assignment
function::mem_page_size
function::mem_page_size Name function::mem_page_size - Number of bytes in a page for this architecture Synopsis Arguments None
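A minimal usage sketch, assuming SystemTap is installed and the standard memory tapset is on the tapset search path:

# Print the page size for this architecture once, then exit
$ stap -e 'probe begin { printf("page size: %d bytes\n", mem_page_size()); exit() }'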
[ "function mem_page_size:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-mem-page-size
Chapter 1. Overview
Chapter 1. Overview AMQ C++ is a library for developing messaging applications. It enables you to write C++ applications that send and receive AMQP messages. AMQ C++ is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.9 Release Notes . AMQ C++ is based on the Proton API from Apache Qpid . For detailed API documentation, see the AMQ C++ API reference . 1.1. Key features An event-driven API that simplifies integration with existing applications SSL/TLS for secure communication Flexible SASL authentication Automatic reconnect and failover Seamless conversion between AMQP and language-native data types Access to all the features and capabilities of AMQP 1.0 1.2. Supported standards and protocols AMQ C++ supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms supported by Cyrus SASL , including ANONYMOUS, PLAIN, SCRAM, EXTERNAL, and GSSAPI (Kerberos) Modern TCP with IPv6 1.3. Supported configurations AMQ C++ supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with GNU C++, compiling as C++11 Microsoft Windows 10 Pro with Microsoft Visual Studio 2015 or newer Microsoft Windows Server 2012 R2 and 2016 with Microsoft Visual Studio 2015 or newer AMQ C++ is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Container A top-level container of connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for sending and receiving messages. It contains senders and receivers. Sender A channel for sending messages to a target. It has a target. Receiver A channel for receiving messages from a source. It has a source. Source A named point of origin for messages. Target A named destination for messages. Message An application-specific piece of information. Delivery A message transfer. AMQ C++ sends and receives messages . Messages are transferred between connected peers over senders and receivers . Senders and receivers are established over sessions . Sessions are established over connections . Connections are established between two uniquely identified containers . Though a connection can have multiple sessions, often this is not needed. The API allows you to ignore sessions unless you require them. A sending peer creates a sender to send messages. The sender has a target that identifies a queue or topic at the remote peer. A receiving peer creates a receiver to receive messages. The receiver has a source that identifies a queue or topic at the remote peer. The sending of a message is called a delivery . The message is the content sent, including all metadata such as headers and annotations. The delivery is the protocol exchange associated with the transfer of that content. To indicate that a delivery is complete, either the sender or the receiver settles it. 
When the other side learns that it has been settled, it will no longer communicate about that delivery. The receiver can also indicate whether it accepts or rejects the message. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/overview
Chapter 29. Finding WSDL at Runtime
Chapter 29. Finding WSDL at Runtime Abstract Hard coding the location of WSDL documents into an application is not scalable. In real deployment environments, you will want to allow the WSDL document's location to be resolved at runtime. Apache CXF provides a number of tools to make this possible. 29.1. Mechanisms for Locating the WSDL Document When developing consumers using the JAX-WS APIs, you must provide a hard coded path to the WSDL document that defines your service. While this is OK in a small environment, using hard coded paths does not work well in enterprise deployments. To address this issue, Apache CXF provides three mechanisms for removing the requirement of using hard coded paths: Section 29.2, "Instantiating a Proxy by Injection" Section 29.3, "Using a JAX-WS Catalog" Section 29.4, "Using a contract resolver" Note Injecting the proxy into your implementation code is generally the best option because it is the easiest to implement. It requires only a client endpoint and a configuration file for injecting and instantiating the service proxy. 29.2. Instantiating a Proxy by Injection Overview Apache CXF's use of the Spring Framework allows you to avoid the hassle of using the JAX-WS APIs to create service proxies. It allows you to define a client endpoint in a configuration file and then inject a proxy directly into the implementation code. When the runtime instantiates the implementation object, it will also instantiate a proxy for the external service based on the configuration. The implementation is handed a reference to the instantiated proxy. Because the proxy is instantiated using information in the configuration file, the WSDL location does not need to be hard coded. It can be changed at deployment time. You can also specify that the runtime should search the application's classpath for the WSDL. Procedure To inject a proxy for an external service into a service provider's implementation, do the following: Deploy the required WSDL documents in a well-known location that all parts of the application can access. Note If you are deploying the application as a WAR file, it is recommended that you place all of the WSDL documents and XML Schema documents in the WEB-INF/wsdl folder of the WAR. Note If you are deploying the application as a JAR file, it is recommended that you place all of the WSDL documents and XML Schema documents in the META-INF/wsdl folder of the JAR. Configure a JAX-WS client endpoint for the proxy that is being injected. Inject the proxy into your service provider using the @Resource annotation. Configuring the proxy You configure a JAX-WS client endpoint using the jaxws:client element in your application's configuration file. This tells the runtime to instantiate an org.apache.cxf.jaxws.JaxWsClientProxy object with the specified properties. This object is the proxy that will be injected into the service provider. At a minimum, you need to provide values for the following attributes: id - Specifies the ID used to identify the client to be injected. serviceClass - Specifies the SEI of the service on which the proxy makes requests. Example 29.1, "Configuration for a Proxy to be Injected into a Service Implementation" shows the configuration for a JAX-WS client endpoint. Example 29.1. Configuration for a Proxy to be Injected into a Service Implementation Note In Example 29.1, "Configuration for a Proxy to be Injected into a Service Implementation" , the wsdlLocation attribute instructs the runtime to load the WSDL from the classpath. 
If books.wsdl is on the classpath, the runtime will be able to find it. For more information on configuring a JAX-WS client, see Section 17.2, "Configuring Consumer Endpoints" . Coding the provider implementation You inject the configured proxy into a service implementation as a resource using the @Resource annotation, as shown in Example 29.2, "Injecting a Proxy into a Service Implementation" . Example 29.2. Injecting a Proxy into a Service Implementation The annotation's name property corresponds to the value of the JAX-WS client's id attribute. The configured proxy is injected into the BookService object declared immediately after the annotation. You can use this object to make invocations on the proxy's external service. 29.3. Using a JAX-WS Catalog Overview The JAX-WS specification mandates that all implementations support: a standard catalog facility to be used when resolving any Web service document that is part of the description of a Web service, specifically WSDL and XML Schema documents. This catalog facility uses the XML catalog facility specified by OASIS. All of the JAX-WS APIs and annotations that take a WSDL URI use the catalog to resolve the WSDL document's location. This means that you can provide an XML catalog file that rewrites the locations of your WSDL documents to suit specific deployment environments. Writing the catalog JAX-WS catalogs are standard XML catalogs as defined by the OASIS XML Catalogs 1.1 specification. They allow you to specify two kinds of mapping: a document's public identifier and/or system identifier to a URI, and the URI of a resource to another URI. Table 29.1, "Common JAX-WS Catalog Elements" lists some common elements used for WSDL location resolution. Table 29.1. Common JAX-WS Catalog Elements Element Description uri Maps a URI to an alternate URI. rewriteURI Rewrites the beginning of a URI. For example, this element allows you to map all URIs that start with http://cxf.apache.org to URIs that start with classpath: . uriSuffix Maps a URI to an alternate URI based on the suffix of the original URI. For example, you could map all URIs that end in foo.xsd to classpath:foo.xsd . Packaging the catalog The JAX-WS specification mandates that the catalog used to resolve WSDL and XML Schema documents is assembled using all available resources named META-INF/jax-ws-catalog.xml . If your application is packaged into a single JAR or WAR, you can place the catalog into a single file. If your application is packaged as multiple JARs, you can split the catalog into a number of files. Each catalog file could be modularized to only deal with WSDLs accessed by the code in the specific JARs. 29.4. Using a contract resolver Overview The most involved mechanism for resolving WSDL document locations at runtime is to implement your own custom contract resolver. This requires that you provide an implementation of the Apache CXF-specific ServiceContractResolver interface. You also need to register your custom resolver with the bus. Once properly registered, the custom contract resolver will be used to resolve the location of any required WSDL and schema documents. Implementing the contract resolver A contract resolver is an implementation of the org.apache.cxf.endpoint.ServiceContractResolver interface. As shown in Example 29.3, "ServiceContractResolver Interface" , this interface has a single method, getContractLocation() , that needs to be implemented. getContractLocation() takes the QName of a service and returns the URI for the service's WSDL contract. Example 29.3. 
ServiceContractResolver Interface The logic used to resolve the WSDL contract's location is application specific. You can add logic that resolves contract locations from a UDDI registry, a database, a custom location on a file system, or any other mechanism you choose. Registering the contract resolver programmatically Before the Apache CXF runtime will use your contract resolver, you must register it with a contract resolver registry. Contract resolver registries implement the org.apache.cxf.endpoint.ServiceContractResolverRegistry interface. However, you do not need to implement your own registry. Apache CXF provides a default implementation in the org.apache.cxf.endpoint.ServiceContractResolverRegistryImpl class. To register a contract resolver with the default registry you do the following: Get a reference to the default bus object. Get the service contract registry from the bus using the bus' getExtension() method. Create an instance of your contract resolver. Register your contract resolver with the registry using the registry's register() method. Example 29.4, "Registering a Contract Resolver" shows the code for registering a contract resolver with the default registry. Example 29.4. Registering a Contract Resolver The code in Example 29.4, "Registering a Contract Resolver" does the following: Gets a bus instance. Gets the bus' contract resolver registry. Creates an instance of a contract resolver. Registers the contract resolver with the registry. Registering a contract resolver using configuration You can also implement a contract resolver so that it can be added to a client through configuration. The contract resolver is implemented in such a way that when the runtime reads the configuration and instantiates the resolver, the resolver registers itself. Because the runtime handles the initialization, you can decide at runtime if a client needs to use the contract resolver. To implement a contract resolver so that it can be added to a client through configuration do the following: Add an init() method to your contract resolver implementation. Add logic to your init() method that registers the contract resolver with the contract resolver registry as shown in Example 29.4, "Registering a Contract Resolver" . Decorate the init() method with the @PostConstruct annotation. Example 29.5, "Service Contract Resolver that can be Registered Using Configuration" shows a contract resolver implementation that can be added to a client using configuration. Example 29.5. Service Contract Resolver that can be Registered Using Configuration To register the contract resolver with a client you need to add a bean element to the client's configuration. The bean element's class attribute is the name of the class implementing the contract resolver. Example 29.6, "Bean Configuring a Contract Resolver" shows a bean for adding a configuration resolver implemented by the org.apache.cxf.demos.myContractResolver class. Example 29.6. Bean Configuring a Contract Resolver Contract resolution order When a new proxy is created, the runtime uses the contract registry resolver to locate the remote service's WSDL contract. The contract resolver registry calls each contract resolver's getContractLocation() method in the order in which the resolvers were registered. It returns the first URI returned from one of the registered contract resolvers. If you registered a contract resolver that attempted to resolve the WSDL contract at a well known shared file system, it would be the only contract resolver used. 
However, if you subsequently registered a contract resolver that resolved WSDL locations using a UDDI registry, the registry could use both resolvers to locate a service's WSDL contract. The registry would first attempt to locate the contract using the shared file system contract resolver. If that contract resolver failed, the registry would then attempt to locate it using the UDDI contract resolver.
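Section 29.3 describes the catalog elements but does not show a complete file. The following is a minimal sketch of a META-INF/jax-ws-catalog.xml; the service URL and the classpath layout are placeholder values, and only the standard OASIS catalog elements named above are used:

<!-- META-INF/jax-ws-catalog.xml: remap WSDL locations without touching code -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- Map one specific WSDL URI to a copy bundled on the classpath -->
  <uri name="http://services.example.com/books?wsdl"
       uri="classpath:wsdl/books.wsdl"/>
  <!-- Rewrite any URI starting with http://cxf.apache.org/ to a classpath prefix -->
  <rewriteURI uriStartString="http://cxf.apache.org/"
              rewritePrefix="classpath:wsdl/"/>
</catalog>

The uriSuffix element follows the same pattern, matching on the end of the original URI instead of the beginning.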
[ "<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:client id=\"bookClient\" serviceClass=\"org.apache.cxf.demo.BookService\" wsdlLocation=\"classpath:books.wsdl\"/> </beans>", "package demo.hw.server; import org.apache.hello_world_soap_http.Greeter; @javax.jws.WebService(portName = \"SoapPort\", serviceName = \"SOAPService\", targetNamespace = \"http://apache.org/hello_world_soap_http\", endpointInterface = \"org.apache.hello_world_soap_http.Greeter\") public class StoreImpl implements Store { @Resource(name=\"bookClient\") private BookService proxy; }", "public interface ServiceContractResolver { URI getContractLocation(QName qname); }", "BusFactory bf=BusFactory.newInstance(); Bus bus=bf.createBus(); ServiceContractResolverRegistry registry = bus.getExtension(ServiceContractResolverRegistry); JarServiceContractResolver resolver = new JarServiceContractResolver(); registry.register(resolver);", "import javax.annotation.PostConstruct; import javax.annotation.Resource; import javax.xml.namespace.QName; import org.apache.cxf.Bus; import org.apache.cxf.BusFactory; public class UddiResolver implements ServiceContractResolver { private Bus bus; @PostConstruct public void init() { BusFactory bf=BusFactory.newInstance(); Bus bus=bf.createBus(); if (null != bus) { ServiceContractResolverRegistry resolverRegistry = bus.getExtension(ServiceContractResolverRegistry.class); if (resolverRegistry != null) { resolverRegistry.register(this); } } } public URI getContractLocation(QName serviceName) { } }", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <bean id=\"myResolver\" class=\"org.apache.cxf.demos.myContractResolver\" /> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSWSDLBootstrap
2.4. Virtualization
2.4. Virtualization ovirt-node component, BZ# 747102 Upgrades from Beta to the GA version will result in an incorrect partitioning of the host. The GA version must be installed clean. UEFI machines must be set to legacy boot options for RHEV-H to boot successfully after installation. kernel component When a system boots from SAN, it starts the libvirtd service, which enables IP forwarding. The service causes a driver reset on both Ethernet ports, which causes a loss of all paths to the OS disk. Under this condition, the system cannot load firmware files from the OS disk to initialize the Ethernet ports, never recovers the paths to the OS disk, and fails to boot from SAN. To work around this issue, add the bnx2x.disable_tpa=1 option to the kernel command line of the GRUB menu, or do not install virtualization-related software and manually enable IP forwarding when needed. kernel component Booting Red Hat Enterprise Linux 6.2 as an HVM guest with more than one vCPU fails on machines that support SMEP when using Red Hat Enterprise Linux 5.7 and earlier Xen hypervisors. To work around this issue, boot the guest with the nosmep kernel command line option. vdsm component If the /root/.ssh directory is missing from a host when it is added to a Red Hat Enterprise Virtualization Manager data center, the directory is created with an incorrect SELinux context, and SSH'ing into the host is denied. To work around this issue, manually create the /root/.ssh directory with the correct SELinux context: vdsm component VDSM now configures libvirt so that connection to its local read-write UNIX domain socket is password-protected by SASL. The intention is to protect virtual machines from human errors by local host administrators. All operations that may change the state of virtual machines on a Red Hat Enterprise Virtualization-controlled host must be performed from Red Hat Enterprise Virtualization Manager. libvirt component In earlier versions of Red Hat Enterprise Linux, libvirt permitted PCI devices to be insecurely assigned to guests. In Red Hat Enterprise Linux 6, assignment of insecure devices is disabled by default by libvirt . However, this may cause assignment of previously working devices to start failing. To enable the old, insecure setting, edit the /etc/libvirt/qemu.conf file, set the relaxed_acs_check = 1 parameter, and restart libvirtd ( service libvirtd restart ). Note that this action will re-open possible security issues. virtio-win component, BZ# 615928 The balloon service on Windows 7 guests can only be started by the Administrator user. libvirt component, BZ# 622649 libvirt uses transient iptables rules for managing NAT or bridging to virtual machine guests. Any external command that reloads the iptables state (such as running system-config-firewall ) will overwrite the entries needed by libvirt . Consequently, after running any command or tool that changes the state of iptables , guests may lose access to the network. To work around this issue, use the service libvirtd reload command to restore libvirt 's additional iptables rules. virtio-win component, BZ# 612801 A Windows virtual machine must be restarted after the installation of the kernel Windows driver framework. If the virtual machine is not restarted, it may crash when a memory balloon operation is performed.
qemu-kvm component, BZ# 720597 Installation of Windows 7 Ultimate x86 (32-bit) Service Pack 1 on a guest with more than 4 GB of RAM and more than one CPU from a DVD medium often crashes during the final steps of the installation process due to a system hang. To work around this issue, use the Windows Update utility to install the Service Pack. qemu-kvm component, BZ# 612788 A dual-function Intel 82576 Gigabit Ethernet Controller interface (codename: Kawela, PCI Vendor/Device ID: 8086:10c9) cannot have both physical functions (PFs) device-assigned to a Windows 2008 guest. Either physical function can be device-assigned to a Windows 2008 guest (PCI function 0 or function 1), but not both. virt-v2v component In Red Hat Enterprise Linux 6.2, the default virt-v2v configuration is split into two files: /etc/virt-v2v.conf and /var/lib/virt-v2v/virt-v2v.db . The former now contains only local customizations, whereas the latter contains generic configuration which is not intended to be customized. Prior to Red Hat Enterprise Linux 6.2, virt-v2v 's -f flag defaulted to /etc/virt-v2v.conf . In Red Hat Enterprise Linux 6.2, it now defaults to both /etc/virt-v2v.conf and /var/lib/virt-v2v/virt-v2v.db . Data from both of these files is required during conversion. This change has no impact on most users. If a machine is upgraded from Red Hat Enterprise Linux 6.1 to Red Hat Enterprise Linux 6.2, the existing combined /etc/virt-v2v.conf will not be updated. If a user explicitly specifies -f /etc/virt-v2v.conf on the command line, the behavior will be identical to the behavior prior to the update. If the user does not specify the -f command line option, the configuration will use both /etc/virt-v2v.conf and /var/lib/virt-v2v/virt-v2v.db , with the former taking precedence. However, a freshly-installed Red Hat Enterprise Linux 6.2 machine with a default configuration no longer has all required data in /etc/virt-v2v.conf . If the user explicitly specifies -f /etc/virt-v2v.conf on the command line, virt-v2v will not be able to enable virtio support for any guests. To work around this issue, do not use the -f command line option, as omitting it defaults to using both configuration files. If the -f command line option is used, it must be specified twice: first for /etc/virt-v2v.conf and second for /var/lib/virt-v2v/virt-v2v.db . If the virt-v2v command line cannot be altered, the /etc/virt-v2v.conf file must contain a combined configuration. This can be copied from a Red Hat Enterprise Linux 6.1 system, or created by copying all configuration elements from /var/lib/virt-v2v/virt-v2v.db to /etc/virt-v2v.conf . virt-v2v component, BZ# 618091 The virt-v2v utility is able to convert guests running on an ESX server. However, if an ESX guest has a disk with a snapshot, the snapshot must be on the same datastore as the underlying disk storage. If the snapshot and the underlying storage are on different datastores, virt-v2v will report a 404 error while trying to retrieve the storage. virt-v2v component, BZ# 678232 The VMware Tools application on Microsoft Windows is unable to disable itself when it detects that it is no longer running on a VMware platform. Consequently, converting a Microsoft Windows guest from VMware ESX, which has VMware Tools installed, will result in errors. These errors usually manifest as error messages on start-up, and a "Stop Error" (also known as a BSOD) when shutting down the guest. To work around this issue, uninstall VMware Tools on Microsoft Windows guests prior to conversion.
spice-client component Sound recording only works when there is no application accessing the recording device at the client start-up.
[ "~]# mkdir /root/.ssh ~]# chmod 0700 /root/.ssh ~]# restorecon /root/.ssh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/virtualization_issues
Chapter 53. Installing and running Oracle WebLogic Server
Chapter 53. Installing and running Oracle WebLogic Server Oracle WebLogic Server must be installed and running for you to apply many of the configurations that accommodate KIE Server. This section describes how to install and start Oracle WebLogic Server in a standalone Oracle WebLogic Server domain. For the most up-to-date and detailed installation instructions, see the Oracle WebLogic Server product page . Note If you are already running an instance of Oracle WebLogic Server that uses the same listener port as the one to be used by the server you are starting, you must stop the first server before starting the second server. Procedure Download Oracle WebLogic Server 12.2.1.3.0 or later from the Oracle WebLogic Server Downloads page . Sign in to the target system and verify that a certified JDK already exists on your system. The installer requires a certified JDK. For system requirements, see Oracle Fusion Middleware Systems Requirements and Specifications . To download the JDK, see the "About JDK Requirements for an Oracle Fusion Middleware Installation" section in Planning an Installation of Oracle Fusion Middleware . Navigate to the directory where you downloaded the installation program. To launch the installation program, run java -jar from the JDK directory on your system, as shown in the following examples: On UNIX-based operating systems, enter the following command: On Windows operating systems, enter the following command: Replace the JDK location in these examples with the actual JDK location on your system. Follow the installation wizard prompts to complete the installation. After the installation is complete, navigate to the WLS_HOME/user_projects/<DOMAIN_NAME> directory, where <DOMAIN_NAME> is the domain directory. In the following example, mydomain is the domain directory: Enter one of the following commands to start Oracle WebLogic Server: On UNIX-based operating systems, enter the following command: On Windows operating systems, enter the following command: The startup script displays a series of messages, and finally displays a message similar to the following: Open the following URL in a web browser: In this URL, replace the following placeholders: Replace <HOST> with the system name or IP address of the host server. Replace <PORT> with the number of the port on which the host server is listening for requests (7001 by default). For example, to start the Administration Console for a local instance of Oracle WebLogic Server running on your system, enter the following URL in a web browser: If you started the Administration Console using secure socket layer (SSL), you must add s after http, as follows: https://<HOST>:<PORT>/console When the login page of the WebLogic Administration Console appears, enter your administrative credentials.
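If you prefer to verify the server from a script rather than a browser, a simple probe of the Administration Console URL is enough to confirm that the server is accepting requests. The following minimal Java sketch assumes the default host and port (localhost:7001) mentioned above; adjust the URL for your environment.

import java.net.HttpURLConnection;
import java.net.URL;

public class ConsoleCheck {
    public static void main(String[] args) throws Exception {
        // Default Administration Console URL; replace the host and port if your domain uses different values.
        URL url = new URL("http://localhost:7001/console/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setConnectTimeout(5000);
        connection.setReadTimeout(5000);
        connection.setRequestMethod("GET");
        // Any HTTP response, including a redirect to the login page, means the server is up and listening.
        System.out.println("Administration Console responded with HTTP " + connection.getResponseCode());
        connection.disconnect();
    }
}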
[ "/home/Oracle/jdk/jdk1.8.0_131/bin/java -jar fmw_12.2.1.3.0_wls_generic.jar", "C:\\Program Files\\Java\\jdk1.8.0_131\\bin\\java -jar fmw_12.2.1.3.0_wls_generic.jar", "WLS\\user_projects\\mydomain", "startWebLogic.sh", "startWebLogic.cmd", "<Dec 8, 2017 3:50:42 PM PDT> <Notice> <WebLogicServer> <000360> <Server started in RUNNING mode>", "http://<HOST>:<PORT>/console", "http://localhost:7001/console/" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/wls-install-start-proc
Planning your deployment
Planning your deployment Red Hat OpenShift Data Foundation 4.17 Important considerations when deploying Red Hat OpenShift Data Foundation 4.17 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for a workload only when the workload does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On-premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data-intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported; it is recommended to use a RADOS Block Device (RBD) volume instead. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. Chapter 2.
Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally on, Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see Installing on IBM Power . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you to select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or make its services available from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator-based deployment and management.
You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manage these applications. A simple deployment is best for situations where: Storage requirements are not clear. Red Hat OpenShift Data Foundation services run co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. For Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when: Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, in cloud or virtualized environments, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, such as Site Reliability Engineering (SRE) or storage, needs to manage the external cluster providing storage services, possibly a pre-existing one. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster-level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation . Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes.
Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices, to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, those nodes do not require an OpenShift Container Platform subscription. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE ROSA with hosted control planes (HCP) [Technology Preview] Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. Chapter 4. External storage services Red Hat OpenShift Data Foundation can use IBM FlashSystems or make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. The external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters. Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. Law mandates this standard for US government agencies and contractors, and it is also referenced in other international and industry-specific standards. Red Hat OpenShift Data Foundation now uses the FIPS-validated cryptographic modules. Red Hat Enterprise Linux OS/CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable FIPS mode on OpenShift Container Platform before you install OpenShift Data Foundation. OpenShift Container Platform must run on RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2.
Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat Openshift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in a physical media to escape your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. To start with, Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key System (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you can not migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The MultiCloud Object Gateway supports encryption by default. See the deployment guides for more information. Cluster wide encryption is supported in OpenShift Data Foundation 4.6 without Key Management System (KMS). Starting with OpenShift Data Foundation 4.7, it supports with and without HashiCorp Vault KMS. Starting with OpenShift Data Foundation 4.12, it supports with and without both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. 
Common security practices require periodic encryption key rotation. Red Hat OpenShift Data Foundation automatically rotates encryption keys stored in a Kubernetes secret (non-KMS) weekly. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . Cluster wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using vault tokens. A Kubernetes secret containing the vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected, the administrator has to provide the vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with Vault using service accounts. If this authentication method is selected, the administrator has to provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. This method is available from OpenShift Data Foundation 4.10. Currently, HashiCorp Vault is the only supported KMS. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault KV secret engine, API version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4. Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol (msgr2) Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit.
This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment while the cluster is being created. See the deployment guide for your environment for instructions on enabling data encryption in-transit during cluster creation. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx. Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx. Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. 
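The rest of this section spells out the exact ratios, but the underlying arithmetic is simple: count the cores that the rules attribute to a node, then divide by two and round up, because subscriptions come in 2-core units. The following minimal Java sketch illustrates that accounting for the two cases described below, a hyperthreaded x86 node (2 cores per 4 vCPUs) and an IBM Power node at SMT level 8 (vCPUs divided by the SMT level). The node sizes are examples only, not sizing guidance.

public class SubscriptionMath {

    // Subscriptions are sold in 2-core units, so round the attributed core count up to the next pair.
    static int subscriptionsFor(int cores) {
        return (cores + 1) / 2;
    }

    public static void main(String[] args) {
        // Hyperthreaded x86 node: 2 subscription cores cover 4 vCPUs, so 8 vCPUs count as 4 cores.
        int vcpus = 8;
        int coresHyperthreaded = vcpus / 2;
        System.out.println("8 vCPUs (hyperthreading) -> " + coresHyperthreaded + " cores -> "
                + subscriptionsFor(coresHyperthreaded) + " x 2-core subscriptions");

        // IBM Power node at SMT level 8: divide vCPUs by the SMT level to get cores.
        int powerVcpus = 16;
        int smtLevel = 8;
        int powerCores = powerVcpus / smtLevel;
        System.out.println("16 vCPUs at SMT-8 -> " + powerCores + " cores -> "
                + subscriptionsFor(powerCores) + " x 2-core subscription(s)");
    }
}

Both results match the worked examples in the following sections: two 2-core subscriptions for an 8 vCPU hyperthreaded VM, and one 2-core subscription for a 16 vCPU VM at SMT level 8.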
Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node, however each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at a SMT level 8 will require a 2 core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power have a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5. 
Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.17 is supported only on OpenShift Container Platform version 4.17 and its minor versions. Bug fixes for version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An Internal cluster must meet both, storage device requirements and have a storage class that provides, EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. In case a high throughput is required, gp3-csi is recommended to be used when deploying OpenShift Data Foundation. If you need a high input/output operation per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provide local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 7.0 or later vSphere 8.0 or later For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an Internal cluster must meet both the, storage device requirements and have a storage class providing either, vSAN or VMFS datastore via the vsphere-volume provisioner VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both, storage device requirements and have a storage class that provides, an azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only. 
An internal cluster must meet both, storage device requirements and have a storage class that provides, a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both, storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An Internal cluster must meet both, storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86. An Internal cluster must meet both, storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. ROSA with hosted control planes (HCP) [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both, storage device requirements and have a storage class that provides AWS EBS volumes via gp3-csi provisioner. 7.1.10. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provide local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install a RHCS cluster, see the installation guide . 7.2.2. IBM FlashSystem To use IBM FlashSystem as a pluggable external storage on other providers, you need to first deploy it before you can deploy OpenShift Data Foundation, which would use the IBM FlashSystem storage class as a backing storage. For the latest supported FlashSystem storage systems and versions, see IBM ODF FlashSystem driver documentation . For instructions on how to deploy OpenShift Data Foundation, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. 
Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3-node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs. Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with an additional 500 GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3-node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs . 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes.
Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce resource consumption. Table 7.6. Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale is between 1 and 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 Gi of RAM. NFS is optional and is disabled by default. The NFS volume can be accessed in two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS . 7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post deployment. Table 7.7. Recommended resource requirement for different performance profiles Performance profile CPU Memory Lean 24 72 GiB Balanced 30 72 GiB Performance 45 96 GiB Important Make sure to select the profiles based on the available free resources as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for Internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node.
This recommendation ensures both that nodes stay below cloud provider dynamic storage device attachment limits, and that the recovery time after node failures with local storage devices stays limited. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in increments of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB, or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits, and resource requirements . 7.5.2. Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.8. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.9. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D*M*N/3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB Chapter 8. Network requirements OpenShift Data Foundation requires at least one network interface used for the cluster network to be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced support for IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in OpenShift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly.
OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use multi-network plug-in Multus on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks . You can configure your Multus networks to use IPv4 or IPv6 as a technology preview. This works only for Multus networks that are pure IPv4 or pure IPv6. Networks cannot be mixed mode. Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. 8.2.1. Multus prerequisites In order for Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. This section will help clarify questions that could arise. Two basic requirements must be met: OpenShift hosts must be able to route successfully to the Multus public network. Pods on the Multus public network must be able to route successfully to OpenShift hosts. These two requirements can be broken down further as follows: For routing Kubernetes hosts to the Multus public network, each host must ensure the following: The host must have an interface connected to the Multus public network (the "public-network-interface"). The "public-network-interface" must have an IP address. A route must exist to direct traffic destined for pods on the Multus public network through the "public-network-interface". For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachmentDefinition must be configured to ensure the following: The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network. To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. Generally, both the NetworkAttachmentDefinition, and node configurations must use the same network technology (Macvlan) to connect to the Multus public network. Node configurations and pod configurations are interrelated and tightly coupled. Both must be planned at the same time, and OpenShift Data Foundation cannot support Multus public networks without both. The "public-network-interface" must be the same for both. Generally, the connection technology (Macvlan) should also be the same for both. 
IP range(s) in the NetworkAttachmentDefinition must be encoded as routes on nodes, and, in mirror, IP ranges for nodes must be encoded as routes in the NetworkAttachmentDefinition. Some installations might not want to use the same public network IP address range for both pods and nodes. In the case where there are different ranges for pods and nodes, additional steps must be taken to ensure each range routes to the other so that they act as a single, contiguous network.These requirements require careful planning. See Multus examples to help understand and implement these requirements. Tip There are often ten or more OpenShift Data Foundation pods per storage node. The pod address space usually needs to be several times larger (or more) than the host address space. OpenShift Container Platform recommends using the NMState operator's NodeNetworkConfigurationPolicies as a good method of configuring hosts to meet host requirements. Other methods can be used as well if needed. 8.2.1.1. Multus network address space sizing Networks must have enough addresses to account for the number of storage pods that will attach to the network, plus some additional space to account for failover events. It is highly recommended to also plan ahead for future storage cluster expansion and estimate how large the OpenShift Container Platform and OpenShift Data Foundation clusters may grow in the future. Reserving addresses for future expansion means that there is lower risk of depleting the IP address pool unexpectedly during expansion. It is safest to allocate 25% more addresses (or more) than the total maximum number of addresses that are expected to be needed at one time in the storage cluster's lifetime. This helps lower the risk of depleting the IP address pool during failover and maintenance. For ease of writing corresponding network CIDR configurations, rounding totals up to the nearest power of 2 is also recommended. Three ranges must be planned: If used, the public Network Attachment Definition address space must include enough IPs for the total number of ODF pods running in the openshift-storage namespace If used, the cluster Network Attachment Definition address space must include enough IPs for the total number of OSD pods running in the openshift-storage namespace If the Multus public network is used, the node public network address space must include enough IPs for the total number of OpenShift nodes connected to the Multus public network. Note If the cluster uses a unified address space for the public Network Attachment Definition and node public network attachments, add these two requirements together. This is relevant, for example, if DHCP is used to manage IPs for the public network. 8.2.1.1.1. Recommendation The following recommendation suffices for most organizations. The recommendation uses the last 6.25% (1/16) of the reserved private address space (192.168.0.0/16), assuming the beginning of the range is in use or otherwise desirable. Approximate maximums (accounting for 25% overhead) are given. Table 8.1. Multus recommendations Network Network range CIDR Approximate maximums Public Network Attachment Definition 192.168.240.0/21 1,600 total ODF pods Cluster Network Attachment Definition 192.168.248.0/22 800 OSDs Node public network attachments 192.168.252.0/23 400 total nodes 8.2.1.1.2. Calculation More detailed address space sizes can be determined as follows: Determine the maximum number of OSDs that are likely to be needed in the future. Add 25%, then add 5. 
Round the result up to the nearest power of 2. This is the cluster address space size. Begin with the un-rounded number calculated in step 1. Add 64, then add 25%. Round the result up to the nearest power of 2. This is the public address space size for pods. Determine the maximum number of total OpenShift nodes (including storage nodes) that are likely to be needed in the future. Add 25%. Round the result up to the nearest power of 2. This is the public address space size for nodes. 8.2.1.2. Verifying requirements have been met After configuring nodes and creating the Multus public NetworkAttachmentDefinition (see Creating network attachment definitions ) check that the node configurations and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each node can ping pods via the public network. Start a daemonset similar to the following example: List the Multus public network IPs assigned to test pods using a command like the following example. This example command lists all IPs assigned to all test pods (each will have 2 IPs). From the output, it is easy to manually extract the IPs associated with the Multus public network. In the example, test pod IPs on the Multus public network are: 192.168.20.22 192.168.20.29 192.168.20.23 Check that each node (NODE) can reach all test pod IPs over the public network: If any node does not get a successful ping to a running pod, it is not safe to proceed. Diagnose and fix the issue, then repeat this testing. Some reasons you may encounter a problem include: The host may not be properly attached to the Multus public network (via Macvlan) The host may not be properly configured to route to the pod IP range The public NetworkAttachmentDefinition may not be properly configured to route back to the host IP range The host may have a firewall rule blocking the connection in either direction The network switch may have a firewall or security rule blocking the connection Suggested debugging steps: Ensure nodes can ping each other over using public network "shim" IPs Ensure the output of ip address 8.2.2. Multus examples The relevant network plan for this cluster is as follows: A dedicated NIC provides eth0 for the Multus public network Macvlan will be used to attach OpenShift pods to eth0 The IP range 192.168.0.0/16 is free in the example cluster - pods and nodes will share this IP range on the Multus public network Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 Kubernetes hosts, more than the example organization will ever need) Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255) The example organization does not want to use DHCP unless necessary; therefore, nodes will have IPs on the Multus network (via eth0) assigned statically using the NMState operator 's NodeNetworkConfigurationPolicy resources With DHCP unavailable, Whereabouts will be used to assign IPs to the Multus public network because it is easy to use out of the box There are 3 compute nodes in the OpenShift cluster on which OpenShift Data Foundation also runs: compute-0, compute-1, and compute-2 Nodes' network policies must be configured to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Generally speaking, the host must connect to the Multus public network using the same technology that pods do. Pod connections are configured in the Network Attachment Definition. 
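As a quick, illustrative sanity check of this plan against the sizing guidance in section 8.2.1.1 (shell arithmetic only; the figures mirror the example plan and are not recommendations):

# Node range 192.168.252.0/22, pod range = remainder of 192.168.0.0/16
NODE_ADDRS=$(( 2 ** (32 - 22) ))               # 1024 addresses reserved for nodes
POD_ADDRS=$(( 2 ** (32 - 16) - NODE_ADDRS ))   # 64512 addresses left for pods
echo "node addresses: ${NODE_ADDRS}, pod addresses: ${POD_ADDRS}"

With three storage nodes and on the order of ten or more OpenShift Data Foundation pods per storage node, both ranges are far larger than required, which leaves ample headroom for failover and future expansion.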
Because the host IP range is a subset of the whole range, hosts are not able to route to pods simply by IP assignment. A route must be added to hosts to allow them to route to the whole 192.168.0.0/16 range. NodeNetworkConfigurationPolicy desiredState specs will look like the following: For static IP management, each node must have a different NodeNetworkConfigurationPolicy. Select separate nodes for each policy to configure static networks. A "shim" interface is used to connect hosts to the Multus public network using the same technology as the Network Attachment Definition will use. The host's "shim" must be of the same type as planned for pods, macvlan in this example. The interface must match the Multus public network interface selected in planning, eth0 in this example. The ipv4 (or ipv6` ) section configures node IP addresses on the Multus public network. IPs assigned to this node's shim must match the plan. This example uses 192.168.252.0/22 for node IPs on the Multus public network. For static IP management, don't forget to change the IP for each node. The routes section instructs nodes how to reach pods on the Multus public network. The route destination(s) must match the CIDR range planned for pods. In this case, it is safe to use the entire 192.168.0.0/16 range because it won't affect nodes' ability to reach other nodes over their "shim" interfaces. In general, this must match the CIDR used in the Multus public NetworkAttachmentDefinition. The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' exclude option to simplify the range request. The Whereabouts routes[].dst option ensures pods route to hosts via the Multus public network. This must match the plan for how to attach pods to the Multus public network. Nodes must attach using the same technology, Macvlan. The interface must match the Multus public network interface selected in planning, eth0 in this example. The plan for this example uses whereabouts instead of DHCP for assigning IPs to pods. For this example, it was decided that pods could be assigned any IP in the range 192.168.0.0/16 with the exception of a portion of the range allocated to nodes (see 5). whereabouts provides an exclude directive that allows easily excluding the range allocated for nodes from its pool. This allows keeping the range directive (see 4 ) simple. The routes section instructs pods how to reach nodes on the Multus public network. The route destination ( dst ) must match the CIDR range planned for nodes. 8.2.3. Holder pod deprecation Due to the recurring maintenance impact of holder pods during upgrade (holder pods are present when Multus is enabled), holder pods are deprecated in the ODF v4.17 release and targeted for removal in the ODF v4.17 release. This deprecation requires completing additional network configuration actions before removing the holder pods. In ODF v4.15, clusters with Multus enabled are upgraded to v4.16 following standard upgrade procedures. After the ODF cluster (with Multus enabled) is successfully upgraded to v4.16, administrators must then complete the procedure documented in the article Disabling Multus holder pods to disable and remove holder pods. Be aware that this disabling procedure is time consuming; however, it is not critical to complete the entire process immediately after upgrading to v4.16. It is critical to complete the process before ODF is upgraded to v4.17. 8.2.4. 
Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are more able to be ensured. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Dual network interface segregated configuration schematic example: Triple network interface full segregated configuration schematic example: 8.2.5. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface, however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.6. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which is later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created. 
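As a hedged sketch of how the cluster is later pointed at these definitions (the NetworkAttachmentDefinition names below are assumptions, and the exact fields should be confirmed against the deployment guide for your version), the StorageCluster resource typically selects the public and cluster networks as follows:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  network:
    provider: multus
    selectors:
      public: openshift-storage/public-net    # assumed NAD name
      cluster: openshift-storage/cluster-net  # assumed NAD name
  # remaining StorageCluster fields omitted for brevity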
OpenShift Data Foundation supports the macvlan driver, which includes the following features: Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan . Bridge mode is almost always the best choice. Near-host performance when network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. OpenShift Data Foundation supports the following two types IP address management: whereabouts DHCP Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require range field. Does not require a DHCP server to provide IPs for Pods. Network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network. Caution If there is a DHCP server, ensure Multus configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.7. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus based configuration on bare metal. Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) & DR solutions for stateful apps which are broadly categorized into two broad categories: Metro-DR : Single Region and cross data center protection with no data loss. Regional-DR : Cross Region protection with minimal potential data loss. Disaster Recovery with stretch cluster : Single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities. 9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud these would be similar to protecting from an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 
Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. Note Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter and RHACM requirements . 9.2. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on Asynchronous data replication and hence could have a potential data loss but provides the protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook and it's enhanced with the ability to: Enable pools for mirroring. Automatically mirror images across RBD pools. Provides csi-addons to manage per Persistent Volume Claim mirroring. This release of Regional-DR supports Multi-Cluster configuration that is deployed across different regions and data centers. For example, a 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. 
For detailed solution requirements, see Regional-DR requirements and RHACM requirements . 9.3. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in the OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites(Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, You must have a minimum of five nodes across three zones, where: Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation. For example, the zones can be labeled as: topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . 
Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . Chapter 11. Supported and Unsupported features for IBM Power and IBM Z Table 11.1. List of supported and unsupported features on IBM Power and IBM Z Features IBM Power IBM Z Compact deployment Unsupported Unsupported Dynamic storage devices Unsupported Supported Stretched Cluster - Arbiter Supported Unsupported Federal Information Processing Standard Publication (FIPS) Unsupported Unsupported Ability to view pool compression metrics Supported Unsupported Automated scaling of Multicloud Object Gateway (MCG) endpoint pods Supported Unsupported Alerts to control overprovision Supported Unsupported Alerts when Ceph Monitor runs out of space Supported Unsupported Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem Unsupported Unsupported IPv6 support Unsupported Unsupported Multus Unsupported Unsupported Multicloud Object Gateway (MCG) bucket replication Supported Unsupported Quota support for object data Supported Unsupported Minimum deployment Unsupported Unsupported Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) Supported Unsupported Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM Supported Supported Single Node solution for Radio Access Network (RAN) Unsupported Unsupported Support for network file system (NFS) services Supported Unsupported Ability to change Multicloud Object Gateway (MCG) account credentials Supported Unsupported Multicluster monitoring in Red Hat Advanced Cluster Management console Supported Unsupported Deletion of expired objects in Multicloud Object Gateway lifecycle Supported Unsupported Agnostic deployment of OpenShift Data Foundation on any OpenShift supported platform Unsupported Unsupported Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure Unsupported Unsupported OpenShift dual stack with OpenShift Data Foundation using IPv4 Unsupported Unsupported Ability to disable Multicloud Object Gateway external service during deployment Unsupported Unsupported Ability to allow overriding of default NooBaa backing store Supported Unsupported Allowing ocs-operator to deploy two MGR pods, one active and one standby Supported Unsupported Disaster Recovery for brownfield deployments Unsupported Supported Automatic scaling of RGW Unsupported Unsupported Chapter 12. Next steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make available services from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides.
Internal mode Deploying OpenShift Data Foundation using Amazon Web Services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMware vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z Deploying OpenShift Data Foundation on any platform External mode Deploying OpenShift Data Foundation in external mode Internal or external For deploying multiple clusters, see Deploying multiple OpenShift Data Foundation clusters .
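Referring back to the package list in Chapter 10, a minimal, hedged sketch of the index pruning step might look like the following; the index tag and mirror registry host are placeholders, not values taken from this guide.

opm index prune \
  -f registry.redhat.io/redhat/redhat-operator-index:v4.17 \
  -p ocs-operator,odf-operator,mcg-operator,odf-csi-addons-operator,odr-cluster-operator,odr-hub-operator \
  -t mirror.example.com/olm/redhat-operator-index:v4.17

The pruned image is then pushed to the mirror registry and referenced by a CatalogSource named redhat-operators, as noted in Chapter 10.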
[ "apiVersion: apps/v1 kind: DaemonSet metadata: name: multus-public-test namespace: openshift-storage labels: app: multus-public-test spec: selector: matchLabels: app: multus-public-test template: metadata: labels: app: multus-public-test annotations: k8s.v1.cni.cncf.io/networks: openshift-storage/public-net # spec: containers: - name: test image: quay.io/ceph/ceph:v18 # image known to have 'ping' installed command: - sleep - infinity resources: {}", "oc -n openshift-storage describe pod -l app=multus-public-test | grep -o -E 'Add .* from .*' Add eth0 [10.128.2.86/23] from ovn-kubernetes Add net1 [192.168.20.22/24] from default/public-net Add eth0 [10.129.2.173/23] from ovn-kubernetes Add net1 [192.168.20.29/24] from default/public-net Add eth0 [10.131.0.108/23] from ovn-kubernetes Add net1 [192.168.20.23/24] from default/public-net", "oc debug node/NODE Starting pod/NODE-debug To use host binaries, run `chroot /host` Pod IP: **** If you don't see a command prompt, try pressing enter. sh-5.1# chroot /host sh-5.1# ping 192.168.20.22 PING 192.168.20.22 (192.168.20.22) 56(84) bytes of data. 64 bytes from 192.168.20.22: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 192.168.20.22: icmp_seq=2 ttl=64 time=0.056 ms ^C --- 192.168.20.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1046ms rtt min/avg/max/mdev = 0.056/0.074/0.093/0.018 ms sh-5.1# ping 192.168.20.29 PING 192.168.20.29 (192.168.20.29) 56(84) bytes of data. 64 bytes from 192.168.20.29: icmp_seq=1 ttl=64 time=0.403 ms 64 bytes from 192.168.20.29: icmp_seq=2 ttl=64 time=0.181 ms ^C --- 192.168.20.29 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1007ms rtt min/avg/max/mdev = 0.181/0.292/0.403/0.111 ms sh-5.1# ping 192.168.20.23 PING 192.168.20.23 (192.168.20.23) 56(84) bytes of data. 
64 bytes from 192.168.20.23: icmp_seq=1 ttl=64 time=0.329 ms 64 bytes from 192.168.20.23: icmp_seq=2 ttl=64 time=0.227 ms ^C --- 192.168.20.23 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1047ms rtt min/avg/max/mdev = 0.227/0.278/0.329/0.051 ms", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-0 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-0 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-0 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-1 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-1 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-1 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-2 # [1] namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-2 # [2] desiredState: Interfaces: [3] - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan # [4] state: up mac-vlan: base-iface: eth0 # [5] mode: bridge promiscuous: true ipv4: # [6] enabled: true dhcp: false address: - ip: 192.168.252.2 # STATIC IP FOR compute-2 # [7] prefix-length: 22 routes: # [8] config: - destination: 192.168.0.0/16 # [9] next-hop-interface: odf-pub-shim", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", # [1] \"master\": \"eth0\", # [2] \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", # [3] \"range\": \"192.168.0.0/16\", # [4] \"exclude\": [ \"192.168.252.0/22\" # [5] ], \"routes\": [ # [6] {\"dst\": \"192.168.252.0/22\"} # [7] ] } }'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/planning_your_deployment/dynamic_storage_devices
18.6. JDBC Based Cache Stores
18.6. JDBC Based Cache Stores Red Hat JBoss Data Grid offers several cache stores for use with common data storage formats. JDBC based cache stores are used with any cache store that exposes a JDBC driver. JBoss Data Grid offers the following JDBC based cache stores depending on the key to be persisted: JdbcBinaryStore . JdbcStringBasedStore . JdbcMixedStore . Report a bug 18.6.1. JDBC Datasource Configuration There are three methods of obtaining a connection to the database: connectionPool dataSource simpleConnection connectionPool A connectionPool uses a JDBC driver to establish a connection, along with creating a pool of threads which may be used for any requests. This configuration is typically used in stand-alone environments that require ready access to a database without having the overhead of creating the database connection on each invocation. dataSource A dataSource is a connection factory that understands how to reference a JNDI tree and delegate connections to the datasource. This configuration is typically used in environments where a datasource has already been configured outside of JBoss Data Grid. simpleConnection The simpleConnection is similar to connectionPool in that it uses a JDBC driver to establish each connection; however, instead of creating a pool of threads that are available, a new connection is created on each invocation. This configuration is typically used in test environments, or where only a single connection is required. All of these may be configured declaratively, using the <connectionPool /> , <dataSource /> , or <simpleConnection /> elements. Alternatively, these may be configured programmatically with the connectionPool() , dataSource() , or simpleConnection() methods on the JdbcBinaryStoreConfigurationBuilder , JdbcMixedStoreConfigurationBuilder , or JdbcStringBasedStoreConfigurationBuilder classes. Report a bug 18.6.2. JdbcBinaryStores The JdbcBinaryStore supports all key types. It stores all keys with the same hash value ( hashCode method on the key) in the same table row/blob. The hash value common to the included keys is set as the primary key for the table row/blob. As a result of this hash value, JdbcBinaryStore offers excellent flexibility but at the cost of concurrency and throughput. As an example, if three keys ( k1 , k2 and k3 ) have the same hash code, they are stored in the same table row. If three different threads attempt to concurrently update k1 , k2 and k3 , they must do it sequentially because all three keys share the same row and therefore cannot be simultaneously updated. Report a bug 18.6.2.1. JdbcBinaryStore Configuration (Remote Client-Server Mode) The following is a configuration for JdbcBinaryStore using Red Hat JBoss Data Grid's Remote Client-Server mode with Passivation enabled: For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . Report a bug 18.6.2.2. JdbcBinaryStore Configuration (Library Mode) The following is a sample configuration for the JdbcBinaryStore : For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . Report a bug 18.6.2.3. JdbcBinaryStore Programmatic Configuration The following is a sample configuration for the JdbcBinaryStore : Procedure 18.4.
JdbcBinaryStore Programmatic Configuration (Library Mode) Use the ConfigurationBuilder to create a new configuration object. Add the JdbcBinaryStore configuration builder to build a specific configuration related to this store. The fetchPersistentState element determines whether or not to fetch the persistent state of a cache and apply it to the local cache store when joining the cluster. If the cache store is shared the fetch persistent state is ignored, as caches access the same cache store. A configuration exception will be thrown when starting the cache service if more than one cache loader has this property set to true . The fetchPersistentState property is false by default. The ignoreModifications element determines whether write methods are pushed to the specific cache loader by allowing write operations to the local file cache loader, but not the shared cache loader. In some cases, transient application data should only reside in a file-based cache loader on the same server as the in-memory cache. For example, this would apply with a further JDBC based cache loader used by all servers in the network. ignoreModifications is false by default. The purgeOnStartup element specifies whether the cache is purged when initially started. Configure the table as follows: dropOnExit determines if the table will be dropped when the cache store is stopped. This is set to false by default. createOnStart creates the table when starting the cache store if no table currently exists. This method is true by default. tableNamePrefix sets the prefix for the name of the table in which the data will be stored. The idColumnName property defines the column where the cache key or bucket ID is stored. The dataColumnName property specifies the column where the cache entry or bucket is stored. The timestampColumnName element specifies the column where the time stamp of the cache entry or bucket is stored. The connectionPool element specifies a connection pool for the JDBC driver using the following parameters: The connectionUrl parameter specifies the JDBC driver-specific connection URL. The username parameter contains the user name used to connect via the connectionUrl . The driverClass parameter specifies the class name of the driver used to connect to the database. Note Programmatic configurations can only be used with Red Hat JBoss Data Grid Library mode. Report a bug 18.6.3. JdbcStringBasedStores The JdbcStringBasedStore stores each entry in its own row in the table, instead of grouping multiple entries into each row, resulting in increased throughput under a concurrent load. It also uses a (pluggable) bijection that maps each key to a String object. The Key2StringMapper interface defines the bijection. Red Hat JBoss Data Grid includes a default implementation called DefaultTwoWayKey2StringMapper that handles primitive types. Report a bug 18.6.3.1. JdbcStringBasedStore Configuration (Remote Client-Server Mode) The following is a sample JdbcStringBasedStore for Red Hat JBoss Data Grid's Remote Client-Server mode: For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . Report a bug 18.6.3.2. JdbcStringBasedStore Configuration (Library Mode) The following is a sample configuration for the JdbcStringBasedStore : For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . Report a bug 18.6.3.3. 
JdbcStringBasedStore Multiple Node Configuration (Remote Client-Server Mode) The following is a configuration for the JdbcStringBasedStore in Red Hat JBoss Data Grid's Remote Client-Server mode. This configuration is used when multiple nodes must be used. For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . Report a bug 18.6.3.4. JdbcStringBasedStore Programmatic Configuration The following is a sample configuration for the JdbcStringBasedStore : Procedure 18.5. Configure the JdbcStringBasedStore Programmatically Use the ConfigurationBuilder to create a new configuration object. Add the JdbcStringBasedStore configuration builder to build a specific configuration related to this store. The fetchPersistentState parameter determines whether or not to fetch the persistent state of a cache and apply it to the local cache store when joining the cluster. If the cache store is shared the fetch persistent state is ignored, as caches access the same cache store. A configuration exception will be thrown when starting the cache service if more than one cache loader has this property set to true . The fetchPersistentState property is false by default. The ignoreModifications parameter determines whether write methods are pushed to the specific cache loader by allowing write operations to the local file cache loader, but not the shared cache loader. In some cases, transient application data should only reside in a file-based cache loader on the same server as the in-memory cache. For example, this would apply with a further JDBC based cache loader used by all servers in the network. ignoreModifications is false by default. The purgeOnStartup parameter specifies whether the cache is purged when initially started. Configure the Table dropOnExit determines if the table will be dropped when the cache store is stopped. This is set to false by default. createOnStart creates the table when starting the cache store if no table currently exists. This method is true by default. tableNamePrefix sets the prefix for the name of the table in which the data will be stored. The idColumnName property defines the column where the cache key or bucket ID is stored. The dataColumnName property specifies the column where the cache entry or bucket is stored. The timestampColumnName element specifies the column where the time stamp of the cache entry or bucket is stored. The dataSource element specifies a data source using the following parameters: The jndiUrl specifies the JNDI URL to the existing JDBC. Note Programmatic configurations can only be used with Red Hat JBoss Data Grid Library mode. Report a bug 18.6.4. JdbcMixedStores The JdbcMixedStore is a hybrid implementation that delegates keys based on their type to either the JdbcBinaryStore or JdbcStringBasedStore . Report a bug 18.6.4.1. JdbcMixedStore Configuration (Remote Client-Server Mode) The following is a configuration for a JdbcMixedStore for Red Hat JBoss Data Grid's Remote Client-Server mode: For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . Report a bug 18.6.4.2. JdbcMixedStore Configuration (Library Mode) The following is a sample configuration for the mixedKeyedJdbcStore : For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . 
Report a bug 18.6.4.3. JdbcMixedStore Programmatic Configuration The following is a sample configuration for the JdbcMixedStore : Procedure 18.6. Configure JdbcMixedStore Programmatically Use the ConfigurationBuilder to create a new configuration object. Add the JdbcMixedStore configuration builder to build a specific configuration related to this store. The fetchPersistentState parameter determines whether or not to fetch the persistent state of a cache and apply it to the local cache store when joining the cluster. If the cache store is shared the fetch persistent state is ignored, as caches access the same cache store. A configuration exception will be thrown when starting the cache service if more than one cache loader has this property set to true . The fetchPersistentState property is false by default. The ignoreModifications parameter determines whether write methods are pushed to the specific cache loader by allowing write operations to the local file cache loader, but not the shared cache loader. In some cases, transient application data should only reside in a file-based cache loader on the same server as the in-memory cache. For example, this would apply with a further JDBC based cache loader used by all servers in the network. ignoreModifications is false by default. The purgeOnStartup parameter specifies whether the cache is purged when initially started. Configure the table as follows: dropOnExit determines if the table will be dropped when the cache store is stopped. This is set to false by default. createOnStart creates the table when starting the cache store if no table currently exists. This method is true by default. tableNamePrefix sets the prefix for the name of the table in which the data will be stored. The idColumnName property defines the column where the cache key or bucket ID is stored. The dataColumnName property specifies the column where the cache entry or bucket is stored. The timestampColumnName element specifies the column where the time stamp of the cache entry or bucket is stored. The connectionPool element specifies a connection pool for the JDBC driver using the following parameters: The connectionUrl parameter specifies the JDBC driver-specific connection URL. The username parameter contains the username used to connect via the connectionUrl . The driverClass parameter specifies the class name of the driver used to connect to the database. Note Programmatic configurations can only be used with Red Hat JBoss Data Grid Library mode. Report a bug 18.6.5. Cache Store Troubleshooting 18.6.5.1. IOExceptions with JdbcStringBasedStore An IOException Unsupported protocol version 48 error when using JdbcStringBasedStore indicates that your data column type is set to VARCHAR , CLOB or something similar instead of the correct type, BLOB or VARBINARY . Despite its name, JdbcStringBasedStore only requires that the keys are strings while the values can be any data type, so that they can be stored in a binary column. Report a bug
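For keys that are not primitive types, a custom mapper can be plugged in through the key2StringMapper attribute shown in the sample configurations for this section. The following is a minimal illustrative sketch in Java, assuming a hypothetical BookId key class; the interface package (org.infinispan.loaders.keymappers or org.infinispan.persistence.keymappers) varies between the declarative samples above, so match it to your JBoss Data Grid version. The mapper class name would then replace DefaultTwoWayKey2StringMapper in the key2StringMapper attribute.

import org.infinispan.persistence.keymappers.TwoWayKey2StringMapper;

// Hypothetical composite key type, used only for illustration.
class BookId {
    private final String isbn;
    private final int edition;
    BookId(String isbn, int edition) { this.isbn = isbn; this.edition = edition; }
    String getIsbn() { return isbn; }
    int getEdition() { return edition; }
}

public class BookIdKey2StringMapper implements TwoWayKey2StringMapper {
    @Override
    public boolean isSupportedType(Class<?> keyType) {
        // Only BookId keys are handled by this mapper.
        return keyType == BookId.class;
    }

    @Override
    public String getStringMapping(Object key) {
        // Encode the key as "isbn|edition".
        BookId id = (BookId) key;
        return id.getIsbn() + "|" + id.getEdition();
    }

    @Override
    public Object getKeyMapping(String stringKey) {
        // Reverse the encoding performed by getStringMapping().
        String[] parts = stringKey.split("\\|");
        return new BookId(parts[0], Integer.parseInt(parts[1]));
    }
}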
[ "<local-cache name=\"customCache\"> <!-- Additional configuration elements here --> <binary-keyed-jdbc-store datasource=\"java:jboss/datasources/JdbcDS\" passivation=\"USD{true/false}\" preload=\"USD{true/false}\" purge=\"USD{true/false}\"> <binary-keyed-table prefix=\"JDG\"> <id-column name=\"id\" type=\"USD{id.column.type}\"/> <data-column name=\"datum\" type=\"USD{data.column.type}\"/> <timestamp-column name=\"version\" type=\"USD{timestamp.column.type}\"/> </binary-keyed-table> </binary-keyed-jdbc-store> </local-cache>", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd urn:infinispan:config:jdbc:6.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-6.0.xsd\" xmlns=\"urn:infinispan:config:6.0\"> <!-- Additional configuration elements here --> <persistence> <binaryKeyedJdbcStore xmlns=\"urn:infinispan:config:jdbc:6.0\" fetchPersistentState=\"false\" ignoreModifications=\"false\" purgeOnStartup=\"false\"> <connectionPool connectionUrl=\"jdbc:h2:mem:infinispan_binary_based;DB_CLOSE_DELAY=-1\" username=\"sa\" driverClass=\"org.h2.Driver\"/> <binaryKeyedTable dropOnExit=\"true\" createOnStart=\"true\" prefix=\"ISPN_BUCKET_TABLE\"> <idColumn name=\"ID_COLUMN\" type=\"VARCHAR(255)\" /> <dataColumn name=\"DATA_COLUMN\" type=\"BINARY\" /> <timestampColumn name=\"TIMESTAMP_COLUMN\" type=\"BIGINT\" /> </binaryKeyedTable> </binaryKeyedJdbcStore> </persistence>", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .addStore(JdbcBinaryStoreConfigurationBuilder.class) .fetchPersistentState(false) .ignoreModifications(false) .purgeOnStartup(false) .table() .dropOnExit(true) .createOnStart(true) .tableNamePrefix(\"ISPN_BUCKET_TABLE\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BINARY\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .connectionPool() .connectionUrl(\"jdbc:h2:mem:infinispan_binary_based;DB_CLOSE_DELAY=-1\") .username(\"sa\") .driverClass(\"org.h2.Driver\");", "<local-cache name=\"customCache\"> <!-- Additional configuration elements here --> <string-keyed-jdbc-store datasource=\"java:jboss/datasources/JdbcDS\" passivation=\"true\" preload=\"false\" purge=\"false\" shared=\"false\" singleton=\"true\"> <string-keyed-table prefix=\"JDG\"> <id-column name=\"id\" type=\"USD{id.column.type}\"/> <data-column name=\"datum\" type=\"USD{data.column.type}\"/> <timestamp-column name=\"version\" type=\"USD{timestamp.column.type}\"/> </string-keyed-table> </string-keyed-jdbc-store> </local-cache>", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd urn:infinispan:config:jdbc:6.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-6.0.xsd\" xmlns=\"urn:infinispan:config:6.0\"> <!-- Additional configuration elements here --> <persistence> <stringKeyedJdbcStore xmlns=\"urn:infinispan:config:jdbc:6.0\" fetchPersistentState=\"false\" ignoreModifications=\"false\" purgeOnStartup=\"false\" key2StringMapper=\"org.infinispan.loaders.keymappers.DefaultTwoWayKey2StringMapper\"> <dataSource jndiUrl=\"java:jboss/datasources/JdbcDS\"/> <stringKeyedTable dropOnExit=\"true\" createOnStart=\"true\" prefix=\"ISPN_STRING_TABLE\"> <idColumn name=\"ID_COLUMN\" type=\"VARCHAR(255)\" /> <dataColumn name=\"DATA_COLUMN\" 
type=\"BINARY\" /> <timestampColumn name=\"TIMESTAMP_COLUMN\" type=\"BIGINT\" /> </stringKeyedTable> </stringKeyedJdbcStore> </persistence>", "<subsystem xmlns=\"urn:infinispan:server:core:6.1\" default-cache-container=\"default\"> <cache-container <!-- Additional configuration information here --> > <!-- Additional configuration elements here --> <replicated-cache> <!-- Additional configuration elements here --> <string-keyed-jdbc-store datasource=\"java:jboss/datasources/JdbcDS\" fetch-state=\"true\" passivation=\"false\" preload=\"false\" purge=\"false\" shared=\"false\" singleton=\"true\"> <string-keyed-table prefix=\"JDG\"> <id-column name=\"id\" type=\"USD{id.column.type}\"/> <data-column name=\"datum\" type=\"USD{data.column.type}\"/> <timestamp-column name=\"version\" type=\"USD{timestamp.column.type}\"/> </string-keyed-table> </string-keyed-jdbc-store> </replicated-cache> </cache-container> </subsystem>", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class) .fetchPersistentState(false) .ignoreModifications(false) .purgeOnStartup(false) .table() .dropOnExit(true) .createOnStart(true) .tableNamePrefix(\"ISPN_STRING_TABLE\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BINARY\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .dataSource() .jndiUrl(\"java:jboss/datasources/JdbcDS\");", "<local-cache name=\"customCache\"> <mixed-keyed-jdbc-store datasource=\"java:jboss/datasources/JdbcDS\" passivation=\"true\" preload=\"false\" purge=\"false\"> <binary-keyed-table prefix=\"MIX_BKT2\"> <id-column name=\"id\" type=\"USD{id.column.type}\"/> <data-column name=\"datum\" type=\"USD{data.column.type}\"/> <timestamp-column name=\"version\" type=\"USD{timestamp.column.type}\"/> </binary-keyed-table> <string-keyed-table prefix=\"MIX_STR2\"> <id-column name=\"id\" type=\"USD{id.column.type}\"/> <data-column name=\"datum\" type=\"USD{data.column.type}\"/> <timestamp-column name=\"version\" type=\"USD{timestamp.column.type}\"/> </string-keyed-table> </mixed-keyed-jdbc-store> </local-cache>", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd urn:infinispan:config:jdbc:6.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-6.0.xsd\" xmlns=\"urn:infinispan:config:6.0\"> <!-- Additional configuration elements here --> <persistence> <mixedKeyedJdbcStore xmlns=\"urn:infinispan:config:jdbc:6.0\" fetchPersistentState=\"false\" ignoreModifications=\"false\" purgeOnStartup=\"false\" key2StringMapper=\"org.infinispan.persistence.keymappers.DefaultTwoWayKey2StringMapper\"> <connectionPool connectionUrl=\"jdbc:h2:mem:infinispan_binary_based;DB_CLOSE_DELAY=-1\" username=\"sa\" driverClass=\"org.h2.Driver\"/> <binaryKeyedTable dropOnExit=\"true\" createOnStart=\"true\" prefix=\"ISPN_BUCKET_TABLE_BINARY\"> <idColumn name=\"ID_COLUMN\" type=\"VARCHAR(255)\" /> <dataColumn name=\"DATA_COLUMN\" type=\"BINARY\" /> <timestampColumn name=\"TIMESTAMP_COLUMN\" type=\"BIGINT\" /> </binaryKeyedTable> <stringKeyedTable dropOnExit=\"true\" createOnStart=\"true\" prefix=\"ISPN_BUCKET_TABLE_STRING\"> <idColumn name=\"ID_COLUMN\" type=\"VARCHAR(255)\" /> <dataColumn name=\"DATA_COLUMN\" type=\"BINARY\" /> <timestampColumn name=\"TIMESTAMP_COLUMN\" type=\"BIGINT\" /> </stringKeyedTable> </mixedKeyedJdbcStore> 
</persistence>", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(JdbcMixedStoreConfigurationBuilder.class) .fetchPersistentState(false) .ignoreModifications(false) .purgeOnStartup(false) .stringTable() .dropOnExit(true) .createOnStart(true) .tableNamePrefix(\"ISPN_MIXED_STR_TABLE\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BINARY\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .binaryTable() .dropOnExit(true) .createOnStart(true) .tableNamePrefix(\"ISPN_MIXED_BINARY_TABLE\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BINARY\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .connectionPool() .connectionUrl(\"jdbc:h2:mem:infinispan_binary_based;DB_CLOSE_DELAY=-1\") .username(\"sa\") .driverClass(\"org.h2.Driver\");" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-jdbc_based_cache_stores
Chapter 8. Technology Previews
Chapter 8. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.4. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 8.1. Installer and image creation Red Hat Connector available as a Technology Preview You can now connect to a RHEL system with a single command to consume Red Hat Insights and your subscription content. Available as a Technology Preview in Red Hat Enterprise Linux 8.4, the Red Hat connector ( rhc ) CLI unifies the registration experience and eliminates the need to separately run the subscription-manager and insights-client commands to connect to Red Hat. With Red Hat connector and a Smart Management subscription, you can also remediate issues directly from the cloud. For more information, see the Remote Host Configuration and Management . ( BZ#1957316 ) 8.2. Networking Introducing bareudp device support for encapsulating MPLS traffic over UDP tunnel as a Technology Preview The support for bareudp devices is now available with the ip link command as a Technology Preview. The bareudp devices provide L3 encapsulation tunnelling support for routing traffic with different L3 protocols, such as unicast and multicast multi protocol label switching (MPLS) and IPv4/IPv6 inside the UDP tunnel. You can start routing MPLS packets in UDP with the help of adding tc filters and actions. For example, to create a new bareudp device, use the following command: To route MPLS incoming packets in UDP tunnel using the bareudp0 device, use the following command: For more information about options and parameters used while creating bareudp devices, refer to the Bareudp Type Support section in the ip-link(8) man page. ( BZ#1849815 ) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) KTLS available as a Technology Preview In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality. (BZ#1570255) XDP features that are available as Technology Preview Red Hat provides the usage of the following eXpress Data Path (XDP) features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP hardware offloading. ( BZ#1889737 ) Multi-protocol Label Switching for TC available as a Technology Preview The Multi-protocol Label Switching (MPLS) is an in-kernel data-forwarding mechanism to route traffic flow across enterprise networks. In an MPLS network, the router that receives packets decides the further route of the packets based on the labels attached to the packet. With the usage of labels, the MPLS network has the ability to handle packets with particular characteristics. For example, you can add tc filters for managing packets received from specific ports or carrying specific types of traffic, in a consistent way. 
After packets enter the enterprise network, MPLS routers perform multiple operations on the packets, such as push to add a label, swap to update a label, and pop to remove a label. MPLS allows defining actions locally based on one or multiple labels in RHEL. You can configure routers and set traffic control ( tc ) filters to take appropriate actions on the packets based on the MPLS label stack entry ( lse ) elements, such as label , traffic class , bottom of stack , and time to live . For example, the following command adds a filter to the enp0s1 network interface to match incoming packets having the first label 12323 and the second label 45832 . On matching packets, the following actions are taken: the first MPLS TTL is decremented (packet is dropped if TTL reaches 0) the first MPLS label is changed to 549386 the resulting packet is transmitted over enp0s2 , with destination MAC address 00:00:5E:00:53:01 and source MAC address 00:00:5E:00:53:02 (BZ#1814836, BZ#1856415 ) act_mpls module available as a Technology Preview The act_mpls module is now available in the kernel-modules-extra rpm as a Technology Preview. The module allows the application of Multiprotocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, push and pop MPLS label stack entries with TC filters. The module also allows the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently. (BZ#1839311) Improved Multipath TCP support is available as a Technology Preview Multipath TCP (MPTCP) improves resource usage within the network and resilience to network failure. For example, with Multipath TCP on the RHEL server, smartphones with MPTCP v1 enabled can connect to an application running on the server and switch between Wi-Fi and cellular networks without interrupting the connection to the server. RHEL 8.4 offers additional features, such as: Multiple concurrent active substreams Active-backup support Improved stream performances Better memory usage, with receive and send buffer auto-tuning SYN cookie support Note that either the applications running on the server must natively support MPTCP or administrators must load an eBPF program into the kernel to dynamically change IPPROTO_TCP to IPPROTO_MPTCP . For further details see, Getting started with Multipath TCP . (JIRA:RHELPLAN-57712) The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, an Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. ( BZ#1906489 ) The nispor package is now available as a Technology Preview The nispor package is now available as a Technology Preview, which is a unified interface for Linux network state querying. It provides a unified way to query all running network status through the python and C api, and rust crate. nispor works as the dependency in the nmstate tool. You can install the nispor package as a dependency of nmstate or as an individual package. To install nispor as an individual package, enter: To install nispor as a dependency of nmstate , enter: nispor is listed as the dependency. For more information on using nispor , refer to /usr/share/doc/nispor/README.md file. (BZ#1848817) 8.3. 
Kernel The kexec fast reboot feature is available as Technology Preview The kexec fast reboot feature continues to be available as a Technology Preview. kexec fast reboot significantly speeds the boot process by allowing the kernel to boot directly into the second kernel without passing through the Basic Input/Output System (BIOS) first. To use this feature: Load the kexec kernel manually. Reboot the operating system. ( BZ#1769727 ) The accel-config package available as a Technology Preview The accel-config package is now available on Intel EM64T and AMD64 architectures for RHEL 8.4 as a Technology Preview. This package helps in controlling and configuring data-streaming accelerator (DSA) sub-system in the Linux Kernel. Also, it configures devices via sysfs (pseudo-filesystem), saves and loads the configuration in the json format. (BZ#1843266) SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. This release initiates the kernel support for SGX v1 and v1.5. The version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. (BZ#1660337) eBPF available as a Technology Preview Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which supports creating various types of maps, and also allows to load programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) manual page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: bpftrace , a high-level tracing language that utilizes the eBPF virtual machine. AF_XDP , a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance. (BZ#1559616) The data streaming accelerator driver for kernel is available as a Technology Preview The data streaming accelerator (DSA) driver for the kernel is currently available as a Technology Preview. DSA is an Intel CPU integrated accelerator and supports a shared work queue with process address space ID (pasid) submission and shared virtual memory (SVM). (BZ#1837187) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol which implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. (BZ#1605216) 8.4. 
File systems and storage NVMe/TCP is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme-tcp.ko and nvmet-tcp.ko kernel modules have been added as a Technology Preview. The use of NVMe/TCP as either a storage client or a target is manageable with tools provided by the nvme-cli and nvmetcli packages. The NVMe/TCP target Technology Preview is included only for testing purposes and is not currently planned for full support. (BZ#1696451) File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455) OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. 
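For orientation only, here is what a manual OverlayFS mount using the options discussed above might look like; the /lower, /upper, /work, and /merged directories are placeholders, and the lower layer is assumed to be an XFS file system as required by the restrictions above:

mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work,xino=on /merged

You could additionally pass redirect_dir=on and index=on for fuller POSIX compliance, but remember that an upper layer created with those options should not later be mounted without them, and that manual mounts of this kind remain a Technology Preview outside the supported container engine use cases.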
To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207) Stratis is now available as a Technology Preview Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user. Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . RHEL 8.3 updated Stratis to version 2.1.0. For more information, see Stratis 2.1.0 Release Notes . (JIRA:RHELPLAN-1212) IdM now supports setting up a Samba server on an IdM domain member as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . (JIRA:RHELPLAN-13195) 8.5. High availability and clusters Local mode version of pcs cluster setup command available as a Technology Preview By default, the pcs cluster setup command automatically synchronizes all configuration files to the cluster nodes. Since Red Hat Enterprise Linux 8.3, the pcs cluster setup command provides the --corosync-conf option as a Technology Preview. Specifying this option switches the command to local mode. In this mode, pcs creates a corosync.conf file and saves it to a specified file on the local node only, without communicating with any other node. This allows you to create a corosync.conf file in a script and handle that file by means of the script. ( BZ#1839637 ) Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on Podman, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. 
When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1784200 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect in that case. (BZ#1775847) 8.6. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. Previously, the IdM API was enhanced to enable multiple versions of API commands. These enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers can use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . ( BZ#1664719 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. 
This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) ACME available as a Technology Preview The Automated Certificate Management Environment (ACME) service is now available in Identity Management (IdM) as a Technology Preview. ACME is a protocol for automated identifier validation and certificate issuance. Its goal is to improve security by reducing certificate lifetimes and avoiding manual processes from certificate lifecycle management. In RHEL, the ACME service uses the Red Hat Certificate System (RHCS) PKI ACME responder. The RHCS ACME subsystem is automatically deployed on every certificate authority (CA) server in the IdM deployment, but it does not service requests until the administrator enables it. RHCS uses the acmeIPAServerCert profile when issuing ACME certificates. The validity period of issued certificates is 90 days. Enabling or disabling the ACME service affects the entire IdM deployment. Important It is recommended to enable ACME only in an IdM deployment where all servers are running RHEL 8.4 or later. Earlier RHEL versions do not include the ACME service, which can cause problems in mixed-version deployments. For example, a CA server without ACME can cause client connections to fail, because it uses a different DNS Subject Alternative Name (SAN). Warning Currently, RHCS does not remove expired certificates. Because ACME certificates expire after 90 days, the expired certificates can accumulate and this can affect performance. To enable ACME across the whole IdM deployment, use the ipa-acme-manage enable command: To disable ACME across the whole IdM deployment, use the ipa-acme-manage disable command: To check whether the ACME service is installed and if it is enabled or disabled, use the ipa-acme-manage status command: (JIRA:RHELPLAN-58596) 8.7. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is now available for the 64-bit ARM architecture as a Technology Preview. This enables administrators to configure and manage servers from a graphical user interface (GUI) remotely, using the VNC session. As a consequence, new administration applications are available on the 64-bit ARM architecture. For example: Disk Usage Analyzer ( baobab ), Firewall Configuration ( firewall-config ), Red Hat Subscription Manager ( subscription-manager ), or the Firefox web browser. Using Firefox , administrators can connect to the local Cockpit daemon remotely. (JIRA:RHELPLAN-27394, BZ#1667225, BZ#1667516, BZ#1724302 ) GNOME desktop on IBM Z is available as a Technology Preview The GNOME desktop, including the Firefox web browser, is now available as a Technology Preview on the IBM Z architecture. You can now connect to a remote graphical session running GNOME using VNC to configure and manage your IBM Z servers. (JIRA:RHELPLAN-27737) 8.8. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) Intel Tiger Lake graphics available as a Technology Preview Intel Tiger Lake UP3 and UP4 Xe graphics are now available as a Technology Preview. 
To enable hardware acceleration with Intel Tiger Lake graphics, add the following option on the kernel command line: In this option, replace pci-id with one of the following: The PCI ID of your Intel GPU The * character to enable the i915 driver with all alpha-quality hardware (BZ#1783396) 8.9. Red Hat Enterprise Linux system roles HA Cluster RHEL system role available as a Technology Preview The High Availability Cluster (HA Cluster) role is now available as a Technology Preview. Currently, the following notable configurations are available: Configuring clusters running no fencing and no resources Configuring multi-link clusters Configuring custom cluster names and node names Configuring whether clusters start automatically on boot ( BZ#1893743 ) The postfix role of RHEL system roles available as a Technology Preview Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview. The following roles are fully supported: kdump network selinux storage timesync For more information, see the Knowledgebase article about RHEL system roles . ( BZ#1812552 ) 8.10. Virtualization KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039) AMD SEV for KVM virtual machines As a Technology Preview, RHEL 8 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware. Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 509 running VMs using SEV. Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. To do so, add the following to the VM's XML configuration: The recommended value for N is equal to or greater then the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater. (BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677) Intel vGPU As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, it is possible to enable a VNC console operated by Intel vGPU. 
By enabling it, users can connect to a VNC console of the VM and see the VM's desktop hosted by Intel vGPU. However, this currently only works for RHEL guest operating systems. (BZ#1528684) Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, and IBM Z systems hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor, and host its own VMs. (JIRA:RHELPLAN-14047, JIRA:RHELPLAN-24437) Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine The feature is currently supported with Microsoft Windows Server 2019 and 2016. (BZ#1348508) ESXi hypervisor and SEV-ES available as a Technology Preview for RHEL VMs As a Technology Preview, in RHEL 8.4 and later, you can enable the AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) to secure RHEL virtual machines (VMs) on VMware's ESXi hypervisor, versions 7.0.2 and later. (BZ#1904496) 8.11. Containers CNI plugins are available in Podman as a Technology Preview CNI plugins are now available to use in Podman rootless mode as a Technology Preview. To enable this feature, users are required to build their own rootless CNI infrastructure container image. ( BZ#1932083 ) The crun is available as a Technology Preview The crun OCI runtime is now available for the container-tools:rhel8 module as a Technology Preview. The crun container runtime supports an annotation that allows the container to access the rootless user's additional groups. This is useful for volume mounting in a directory where setgid is set, or where the user only has group access. Currently, neither the crun or runc runtimes fully support cgroupsv2 . (BZ#1841438) A podman container image is available as a Technology Preview The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool is used for managing containers and images, volumes mounted into those containers, and pods made of groups of containers. (JIRA:RHELPLAN-56659) The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861)
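Because several of the virtualization previews above depend on nested virtualization, a quick check of whether nesting is enabled on a RHEL 8 KVM host can be useful. The following sketch assumes an Intel host (on AMD hosts the module is kvm_amd and the parameter path changes accordingly) and requires that no virtual machines are running when the module is reloaded:

cat /sys/module/kvm_intel/parameters/nested
modprobe -r kvm_intel
modprobe kvm_intel nested=1

The first command prints Y (or 1 on older kernels) when nesting is already enabled; the modprobe pair reloads the module with nesting turned on for the current boot.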
[ "ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc", "tc qdisc add dev enp1s0 ingress tc filter add dev enp1s0 ingress proto mpls_uc matchall > action tunnel_key set src_ip 2001:db8::22 dst_ip 2001:db8::21 id 0 > action mirred egress redirect dev bareudp0", "tc filter add dev enp0s1 ingress protocol mpls_uc flower mpls lse depth 1 label 12323 lse depth 2 label 45832 action mpls dec_ttl pipe action mpls modify label 549386 pipe action pedit ex munge eth dst set 00:00:5E:00:53:01 pipe action pedit ex munge eth src set 00:00:5E:00:53:02 pipe action mirred egress redirect dev enp0s2", "yum install nispor", "yum install nmstate", "xfs_info /mount-point | grep ftype", "ipa-acme-manage enable The ipa-acme-manage command was successful", "ipa-acme-manage disable The ipa-acme-manage command was successful", "ipa-acme-manage status ACME is enabled The ipa-acme-manage command was successful", "i915.force_probe= pci-id", "<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/technology_previews
7.8. Creating a Bond Connection Using a GUI
7.8. Creating a Bond Connection Using a GUI You can use the GNOME control-center utility to direct NetworkManager to create a Bond from two or more Wired or InfiniBand connections. It is not necessary to create the connections to be bonded first. They can be configured as part of the process to configure the bond. You must have the MAC addresses of the interfaces available in order to complete the configuration process. 7.8.1. Establishing a Bond Connection Procedure 7.1. Adding a New Bond Connection_Using nm-connection-editor Follow the below steps to create a new bond connection. Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select Bond and click Create . The Editing Bond connection 1 window appears. Figure 7.6. The NetworkManager Graphical User Interface Add a Bond menu On the Bond tab, click Add and select the type of interface you want to use with the bond connection. Click the Create button. Note that the dialog to select the port type only comes up when you create the first port; after that, it will automatically use that same type for all further ports. The Editing bond0 slave 1 window appears. Use the Device MAC address drop-down menu to select the MAC address of the interface to be bonded. The first port's MAC address will be used as the MAC address for the bond interface. If required, enter a clone MAC address to be used as the bond's MAC address. Click the Save button. Figure 7.7. The NetworkManager Graphical User Interface Add a Bond Connection menu The name of the bonded port appears in the Bonded connections window. Click the Add button to add further port connections. Review and confirm the settings and then click the Save button. Edit the bond-specific settings by referring to Section 7.8.1.1, "Configuring the Bond Tab" below. Procedure 7.2. Editing an Existing Bond Connection Follow these steps to edit an existing bond connection. Enter nm-connection-editor in a terminal: Select the connection you want to edit and click the Edit button. Select the General tab. Configure the connection name, auto-connect behavior, and availability settings. Five settings in the Editing dialog are common to all connection types, see the General tab: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the menu of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the drop-down menu. Firewall Zone - Select the firewall zone from the drop-down menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on firewall zones. Edit the bond-specific settings by referring to Section 7.8.1.1, "Configuring the Bond Tab" below. 
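If you prefer to script the same configuration rather than use the GUI, an approximately equivalent setup can be created with nmcli. The interface names ens3 and ens7 and the connection names below are placeholders; adjust the mode to match the policy you intend to select on the Bond tab:

nmcli con add type bond con-name bond0 ifname bond0 mode active-backup miimon 100
nmcli con add type bond-slave con-name bond0-port1 ifname ens3 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname ens7 master bond0
nmcli con up bond0

The resulting profiles can still be reviewed and edited later in nm-connection-editor, so the two approaches can be mixed.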
Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your bond connection, click the Save button to save your customized configuration. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" or IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 7.8.1.1. Configuring the Bond Tab If you have already added a new bond connection (see Procedure 7.1, "Adding a New Bond Connection_Using nm-connection-editor" for instructions), you can edit the Bond tab to set the load sharing mode and the type of link monitoring to use to detect failures of a port connection. Mode The mode that is used to share traffic over the port connections which make up the bond. The default is Round-robin . Other load sharing modes, such as 802.3ad , can be selected by means of the drop-down list. Link Monitoring The method of monitoring the ports ability to carry network traffic. The following modes of load sharing are selectable from the Mode drop-down list: Round-robin Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded port interface beginning with the first one available. This mode might not work behind a bridge with virtual machines without additional switch configuration. Active backup Sets an active-backup policy for fault tolerance. Transmissions are received and sent out through the first available bonded port interface. Another bonded port interface is only used if the active bonded port interface fails. Note that this is the only mode available for bonds of InfiniBand devices. XOR Sets an XOR (exclusive-or) policy. Transmissions are based on the selected hash policy. The default is to derive a hash by XOR of the source and destination MAC addresses multiplied by the modulo of the number of port interfaces. In this mode traffic destined for specific peers will always be sent over the same interface. As the destination is determined by the MAC addresses this method works best for traffic to peers on the same link or local network. If traffic has to pass through a single router then this mode of traffic balancing will be suboptimal. Broadcast Sets a broadcast policy for fault tolerance. All transmissions are sent on all port interfaces. This mode might not work behind a bridge with virtual machines without additional switch configuration. 802.3ad Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all ports in the active aggregator. Requires a network switch that is 802.3ad compliant. Adaptive transmit load balancing Sets an adaptive Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each port interface. Incoming traffic is received by the current port. If the receiving port fails, another port takes over the MAC address of the failed port. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. Adaptive load balancing Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation. 
This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. The following types of link monitoring can be selected from the Link Monitoring drop-down list. It is a good idea to test which channel bonding module parameters work best for your bonded interfaces. MII (Media Independent Interface) The state of the carrier wave of the interface is monitored. This can be done by querying the driver, by querying MII registers directly, or by using ethtool to query the device. Three options are available: Monitoring Frequency The time interval, in milliseconds, between querying the driver or MII registers. Link up delay The time in milliseconds to wait before attempting to use a link that has been reported as up. This delay can be used if some gratuitous ARP requests are lost in the period immediately following the link being reported as " up " . This can happen during switch initialization for example. Link down delay The time in milliseconds to wait before changing to another link when a previously active link has been reported as " down " . This delay can be used if an attached switch takes a relatively long time to change to backup mode. ARP The address resolution protocol ( ARP ) is used to probe one or more peers to determine how well the link-layer connections are working. It is dependent on the device driver providing the transmit start time and the last receive time. Two options are available: Monitoring Frequency The time interval, in milliseconds, between sending ARP requests. ARP targets A comma separated list of IP addresses to send ARP requests to.
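To relate the Mode and Link Monitoring fields to the underlying bonding options, the following sketch modifies a hypothetical existing connection profile named bond0 with nmcli; the values mirror what you would enter in the GUI (802.3ad mode, MII monitoring every 100 ms, 200 ms up and down delays):

nmcli con mod bond0 bond.options "mode=802.3ad,miimon=100,updelay=200,downdelay=200"
nmcli con up bond0

For ARP-based monitoring, the same property takes the arp_interval and arp_ip_target bonding options instead of miimon, updelay, and downdelay.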
[ "~]USD nm-connection-editor", "~]USD nm-connection-editor" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-creating_a_bond_connection_using_a_gui
Chapter 9. Configuring access to Knative services
Chapter 9. Configuring access to Knative services 9.1. Configuring JSON Web Token authentication for Knative services OpenShift Serverless does not currently have user-defined authorization features. To add user-defined authorization to your deployment, you must integrate OpenShift Serverless with Red Hat OpenShift Service Mesh, and then configure JSON Web Token (JWT) authentication and sidecar injection for Knative services. 9.2. Using JSON Web Token authentication with Service Mesh 2.x You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 2.x and OpenShift Serverless. To do this, you must create authentication requests and policies in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service. 9.2.1. Configuring JSON Web Token authentication for Service Mesh 2.x and OpenShift Serverless Important Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress , is not supported when Kourier is enabled. For OpenShift Container Platform, if you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively . Prerequisites You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Add the sidecar.istio.io/inject="true" annotation to your service: Example service apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 1 sidecar.istio.io/rewriteAppHTTPProbers: "true" 2 ... 1 Add the sidecar.istio.io/inject="true" annotation. 2 You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default. Apply the Service resource: USD oc apply -f <filename> Create a RequestAuthentication resource in each serverless application namespace that is a member in the ServiceMeshMemberRoll object: apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: <namespace> spec: jwtRules: - issuer: [email protected] jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json Apply the RequestAuthentication resource: USD oc apply -f <filename> Allow access to the RequestAuthenticaton resource from system pods for each serverless application namespace that is a member in the ServiceMeshMemberRoll object, by creating the following AuthorizationPolicy resource: apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allowlist-by-paths namespace: <namespace> spec: action: ALLOW rules: - to: - operation: paths: - /metrics 1 - /healthz 2 1 The path on your application to collect metrics by system pod. 2 The path on your application to probe by system pod. 
Apply the AuthorizationPolicy resource: USD oc apply -f <filename> For each serverless application namespace that is a member in the ServiceMeshMemberRoll object, create the following AuthorizationPolicy resource: apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: <namespace> spec: action: ALLOW rules: - from: - source: requestPrincipals: ["[email protected]/[email protected]"] Apply the AuthorizationPolicy resource: USD oc apply -f <filename> Verification If you try to use a curl request to get the Knative service URL, it is denied: Example command USD curl http://hello-example-1-default.apps.mycluster.example.com/ Example output RBAC: access denied Verify the request with a valid JWT. Get the valid JWT token: USD TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo "USDTOKEN" | cut -d '.' -f2 - | base64 --decode - Access the service by using the valid token in the curl request header: USD curl -H "Authorization: Bearer USDTOKEN" http://hello-example-1-default.apps.example.com The request is now allowed: Example output Hello OpenShift! 9.3. Using JSON Web Token authentication with Service Mesh 1.x You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 1.x and OpenShift Serverless. To do this, you must create a policy in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service. 9.3.1. Configuring JSON Web Token authentication for Service Mesh 1.x and OpenShift Serverless Important Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress , is not supported when Kourier is enabled. For OpenShift Container Platform, if you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively . Prerequisites You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Add the sidecar.istio.io/inject="true" annotation to your service: Example service apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 1 sidecar.istio.io/rewriteAppHTTPProbers: "true" 2 ... 1 Add the sidecar.istio.io/inject="true" annotation. 2 You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default. Apply the Service resource: USD oc apply -f <filename> Create a policy in a serverless application namespace which is a member in the ServiceMeshMemberRoll object, that only allows requests with valid JSON Web Tokens (JWT): Important The paths /metrics and /healthz must be included in excludedPaths because they are accessed from system pods in the knative-serving namespace. 
apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: <namespace> spec: origins: - jwt: issuer: [email protected] jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json" triggerRules: - excludedPaths: - prefix: /metrics 1 - prefix: /healthz 2 principalBinding: USE_ORIGIN 1 The path on your application to collect metrics by system pod. 2 The path on your application to probe by system pod. Apply the Policy resource: USD oc apply -f <filename> Verification If you try to use a curl request to get the Knative service URL, it is denied: USD curl http://hello-example-default.apps.mycluster.example.com/ Example output Origin authentication failed. Verify the request with a valid JWT. Get the valid JWT token: USD TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo "USDTOKEN" | cut -d '.' -f2 - | base64 --decode - Access the service by using the valid token in the curl request header: USD curl http://hello-example-default.apps.mycluster.example.com/ -H "Authorization: Bearer USDTOKEN" The request is now allowed: Example output Hello OpenShift!
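If JWT verification does not behave as expected, it is often worth confirming first that the sidecar was actually injected into the Knative service's pods. A minimal check, with <namespace> as a placeholder for your serverless application namespace, is to list the container names in each pod and look for istio-proxy next to user-container and queue-proxy:

oc get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'

If istio-proxy is missing, re-check the sidecar.istio.io/inject annotation and the ServiceMeshMemberRoll membership before revisiting the authentication policies.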
[ "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2", "oc apply -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: <namespace> spec: jwtRules: - issuer: [email protected] jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json", "oc apply -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allowlist-by-paths namespace: <namespace> spec: action: ALLOW rules: - to: - operation: paths: - /metrics 1 - /healthz 2", "oc apply -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: <namespace> spec: action: ALLOW rules: - from: - source: requestPrincipals: [\"[email protected]/[email protected]\"]", "oc apply -f <filename>", "curl http://hello-example-1-default.apps.mycluster.example.com/", "RBAC: access denied", "TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -", "curl -H \"Authorization: Bearer USDTOKEN\" http://hello-example-1-default.apps.example.com", "Hello OpenShift!", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2", "oc apply -f <filename>", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: <namespace> spec: origins: - jwt: issuer: [email protected] jwksUri: \"https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json\" triggerRules: - excludedPaths: - prefix: /metrics 1 - prefix: /healthz 2 principalBinding: USE_ORIGIN", "oc apply -f <filename>", "curl http://hello-example-default.apps.mycluster.example.com/", "Origin authentication failed.", "TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -", "curl http://hello-example-default.apps.mycluster.example.com/ -H \"Authorization: Bearer USDTOKEN\"", "Hello OpenShift!" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/configuring-access-to-knative-services
Chapter 22. Limiting storage space usage on XFS with quotas
Chapter 22. Limiting storage space usage on XFS with quotas You can restrict the amount of disk space available to users or groups by implementing disk quotas. You can also define a warning level at which system administrators are informed before a user consumes too much disk space or a partition becomes full. The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas control or report on usage of these items on a user, group, or directory or project level. Group and project quotas are only mutually exclusive on older non-default XFS disk formats. When managing on a per-directory or per-project basis, XFS manages the disk usage of directory hierarchies associated with a specific project. 22.1. Disk quotas In most computing environments, disk space is not infinite. The quota subsystem provides a mechanism to control usage of disk space. You can configure disk quotas for individual users as well as user groups on the local file systems. This makes it possible to manage the space allocated for user-specific files (such as email) separately from the space allocated to the projects that a user works on. The quota subsystem warns users when they exceed their allotted limit, but allows some extra space for current work (hard limit/soft limit). If quotas are implemented, you need to check if the quotas are exceeded and make sure the quotas are accurate. If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator can either help the user determine how to use less disk space or increase the user's disk quota. You can set quotas to control: The number of consumed disk blocks. The number of inodes, which are data structures that contain information about files in UNIX file systems. Because inodes store file-related information, this allows control over the number of files that can be created. 22.2. The xfs_quota tool You can use the xfs_quota tool to manage quotas on XFS file systems. In addition, you can use XFS file systems with limit enforcement turned off as an effective disk usage accounting system. The XFS quota system differs from other file systems in a number of ways. Most importantly, XFS considers quota information as file system metadata and uses journaling to provide a higher level guarantee of consistency. Additional resources xfs_quota(8) man page on your system 22.3. File system quota management in XFS The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas control or report on usage of these items on a user, group, or directory or project level. Group and project quotas are only mutually exclusive on older non-default XFS disk formats. When managing on a per-directory or per-project basis, XFS manages the disk usage of directory hierarchies associated with a specific project. 22.4. Enabling disk quotas for XFS Enable disk quotas for users, groups, and projects on an XFS file system. Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage. Procedure Enable quotas for users: Replace uquota with uqnoenforce to allow usage reporting without enforcing any limits. Enable quotas for groups: Replace gquota with gqnoenforce to allow usage reporting without enforcing any limits. Enable quotas for projects: Replace pquota with pqnoenforce to allow usage reporting without enforcing any limits. Alternatively, include the quota mount options in the /etc/fstab file. 
The following example shows entries in the /etc/fstab file to enable quotas for users, groups, and projects, respectively, on an XFS file system. These examples also mount the file system with read/write permissions: Additional resources xfs(5) and xfs_quota(8) man pages on your system 22.5. Reporting XFS usage Use the xfs_quota tool to set limits and report on disk usage. By default, xfs_quota is run interactively, and in basic mode. Basic mode subcommands simply report usage, and are available to all users. Prerequisites Quotas have been enabled for the XFS file system. See Enabling disk quotas for XFS . Procedure Start the xfs_quota shell: Show usage and limits for the given user: Show free and used counts for blocks and inodes: Run the help command to display the basic commands available with xfs_quota . Specify q to exit xfs_quota . Additional resources xfs_quota(8) man page on your system 22.6. Modifying XFS quota limits Start the xfs_quota tool with the -x option to enable expert mode and run the administrator commands, which allow modifications to the quota system. The subcommands of this mode allow actual configuration of limits, and are available only to users with elevated privileges. Prerequisites Quotas have been enabled for the XFS file system. See Enabling disk quotas for XFS . Procedure Start the xfs_quota shell with the -x option to enable expert mode: Report quota information for a specific file system: For example, to display a sample quota report for /home (on /dev/blockdevice ), use the command report -h /home . This displays output similar to the following: Modify quota limits: For example, to set a soft and hard inode count limit of 500 and 700 respectively for user john , whose home directory is /home/john , use the following command: In this case, pass mount_point which is the mounted xfs file system. Display the expert commands available with xfs_quota -x : Verification Verify that the quota limits have been modified: Additional resources xfs_quota(8) man page on your system 22.7. Setting project limits for XFS Configure limits for project-controlled directories. Procedure Add the project-controlled directories to /etc/projects . For example, the following adds the /var/log path with a unique ID of 11 to /etc/projects . Your project ID can be any numerical value mapped to your project. Add project names to /etc/projid to map project IDs to project names. For example, the following associates a project called logfiles with the project ID of 11 as defined in the step. Initialize the project directory. For example, the following initializes the project directory /var : Configure quotas for projects with initialized directories: Additional resources xfs_quota(8) , projid(5) , and projects(5) man pages on your system
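The limit examples above concentrate on inode counts; block (disk space) limits use the same subcommand with the bsoft and bhard keywords. As a sketch, assuming /home is an XFS file system mounted with the uquota option and testuser is an existing user, the following sets a 5 GiB soft and 6 GiB hard space limit and then prints a human-readable block report:

xfs_quota -x -c 'limit bsoft=5g bhard=6g testuser' /home
xfs_quota -x -c 'report -bh' /home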
[ "mount -o uquota /dev/xvdb1 /xfs", "mount -o gquota /dev/xvdb1 /xfs", "mount -o pquota /dev/xvdb1 /xfs", "vim /etc/fstab /dev/xvdb1 /xfs xfs rw,quota 0 0 /dev/xvdb1 /xfs xfs rw,gquota 0 0 /dev/xvdb1 /xfs xfs rw,prjquota 0 0", "xfs_quota", "xfs_quota> quota username", "xfs_quota> df", "xfs_quota> help", "xfs_quota> q", "xfs_quota -x / path", "xfs_quota> report / path", "User quota on /home (/dev/blockdevice) Blocks User ID Used Soft Hard Warn/Grace ---------- --------------------------------- root 0 0 0 00 [------] testuser 103.4G 0 0 00 [------]", "xfs_quota> limit isoft= 500m ihard= 700m user", "xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/", "xfs_quota> help", "xfs_quota> report -i -u User quota on /home (/dev/loop0) Inodes User ID Used Soft Hard Warn/ Grace ---------- -------------------------------------------------- root 3 0 0 00 [------] testuser 2 500 700 00 [------]", "echo 11:/var/log >> /etc/projects", "echo logfiles:11 >> /etc/projid", "xfs_quota -x -c 'project -s logfiles' /var", "xfs_quota -x -c 'limit -p bhard=1g logfiles' /var" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/assembly_limiting-storage-space-usage-on-xfs-with-quotas_managing-file-systems
Chapter 14. Networking
Chapter 14. Networking NetworkManager rebased to version 1.8 The NetworkManager package has been upgraded to upstream version 1.8, which provides a number of bug fixes and enhancements over the version. Notable changes include: Support for additional route options has been added. Managed state of device until reboot has been persisted. Devices that are externally managed are now correctly handled. Networked reliability on multihomed hosts has been enhanced. Hostname management is now more flexibly configured. Support for changing and enforcing 802-3 link properties has been added. (BZ# 1414103 ) NetworkManager now supports additional features for routes With this update, NetworkManager can set some advanced options: source_address (src, IPv4 only), from , type_of_service (tos), window , maximum_transmission_unit (mtu), congestion_window (cwnd), initial_congestion_window (initcwnd), and initial_receiver_window (initrwnd) for static IPv4 and IPv6 routes of connections. (BZ#1373698) NetworkManager now better handles devices state With this update, NetworkManager now maintains the state of devices after the service restart and takes over interfaces which are set into managed mode during restart. In addition, NetworkManager can handle devices which are not explicitly set as unmanaged but controlled manually by the user or another network service. (BZ# 1394579 ) NetworkManager now supports MACsec (IEEE 802.1AE) This update adds support for configuring Media Access Control Security (MACsec) encryption into NetworkManager . (BZ#1337997) NetworkManager now supports changing and enforcing 802-3 link properties Previously, NetworkManager only exposed 802-3 link properties : 802-3-ethernet.speed , 802-3 ethernet.duplex , and 802-3-ethernet.auto-negotiate . With this update, it is possible to change and enforce them. You can either do this automatically using auto-negotiate=yes , or manually using auto-negotiate=no , speed=<Mbit/s> , duplex=[half,full] . Note that if auto-negotiate=no and either speed or duplex are not set, then the link negotiation is skipped and the auto-negotiate=no, speed=0, duplex=NULL default values are preserved. Note also that the auto-negotiate default value has been changed from yes to no to preserve backward compatibility. Previously, the property was ignored, but now an auto-negotiate value of yes can enforce link negotiation. Setting it to no with speed and/or duplex unset means that link negotiation is ignored. (BZ#1353612) NetworkManager now supports ordering bond slaves based on device names Previously, the existing order of activation for slave connections could cause problems determining the MAC address of the master interface. This update adds more predictable ordering based on device names. You can enable the new ordering using the slaves-order=name setting in NetworkManager configuration. Note that the new ordering is disabled by default and must be explicitly enabled. (BZ# 1420708 ) NetworkManager now supports VFs for SR-IOV devices With this update, the NetworkManager system service supports creating virtual functions (VFs) for Single Root I/O Virtualization (SR-IOV) PCI devices. The number of VFs can be specified using the sriov-num-vfs option in the device section of the NetworkManager configuration file. After VFs are created, NetworkManager can activate connection profiles on them. Note that some properties of a VF interface, such as the Maximum Transmission Unit (MTU), can only be set to values compatible with those that are set on the physical interface. 
(BZ# 1398934 ) Kernel GRE rebased to version 4.8 Kernel Generic Routing Encapsulation (GRE) tunneling has been updated to upstream version 4.8, which provides a number of bug fixes and enhancements over the version. The most notable changes include: Code merge for transmit and receive paths for IPv4 GRE and IPv6 GRE Enhancements that allow link layer address changes without bringing the gre (IPv4 GRE) or ip6gre (IPv6 GRE) device down Support for various offloads such as checksum , scatter-gather , highdma , gso , or gro , for IPv6 GRE traffic Automatic kernel module loading when adding ip6gretap devices Miscellaneous tunneling fixes (such as error handling, MTU calculation, path MTU discovery) up to Linux kernel version 4.8 that affect GRE tunnels (BZ#1369158) dnsmasq rebased to version 2.76 The dnsmasq packages have been upgraded to version 2.76, which provides a number of bug fixes and enhancements. Notable changes include the following: The dhcp_release6 utility is now supported. The ra-param option has been added. Support for the RFC-4242 information-refresh-time options in the reply to the DHCPv6 information request has been added. The ra-advrouter mode for RFC-3775-compliant mobile IPv6 support has been added. The script-arp script has been added and two new functions for the dhcp-script script have been included. It is now possible to use random addresses for DHCPv6 temporary address allocations, instead of algorithmically determined stable addresses. New optional DNS Security Extensions (DNSSEC) support has been disabled. dnsmasq can change the default values of IPv6 Router Advertisement. As a result, the ra-param option is used to change the default priorities and time intervals of routes advertised by dnsmasq . See the dnsmasq(1) man page for more information. (BZ# 1375527 , BZ# 1398337 ) BIND changes the way it handles URI resource records, impacting also URI backward compatibility With this update, the BIND suite no longer adds an additional length byte to a value field when using a URI resource record. This also means that BIND in Red Hat Enterprise Linux (RHEL) 7.4 communicates only in the format described in RFC 7553: https://tools.ietf.org/html/rfc7553 . Note that this update makes new URI records incompatible with records created using BIND in versions of RHEL. Namely, BIND in RHEL 7.4 cannot: Understand URI records provided by versions of BIND in RHEL. Serve URI records to clients using versions of BIND in RHEL. However, BIND in RHEL 7.4 still can: Cache and receive records from both earlier and future versions of BIND in RHEL. Serve records in the old URI format encoded as Unknown DNS Resource Record. See RFC 3597 for details: https://tools.ietf.org/html/rfc3597 . After this update, you do not need to make any change to the DNS zone files. (BZ# 1388534 ) A DHCP client hook example added for DDNS for Microsoft Azure cloud An example of the DHCP client hook for Dynamic DNS (DDNS) for Microsoft Azure cloud has been added to the dhclient package. The administrator can now easily enable this hook, and register Red Hat Enterprise Linux clients with a DDNS server. (BZ#1374119) dhcp_release6 now releases IPv6 addresses With this update, the dhcp_release6 utility can release Dynamic Host Configuration Protocol version 6 (DHCPv6) leases for IPv6 addresses on the local dnsmasq server. See the dhcp_release6(1) man page for more information about the dhcp_release6 command. 
(BZ# 1375569 ) Sendmail now supports ECDHE This update adds the Elliptic Curve Diffie-Hellman Ephemeral Keys (ECDHE) support to Red Hat Enterprise Linux 7 Sendmail . ECDHE is a variant of the Diffie-Hellman protocol that uses elliptic curve cryptography. It is an anonymous key agreement protocol that allows two parties to establish a shared secret over an insecure channel. (BZ# 1124827 ) telnet now supports the -6 option With this update, the telnet utility supports the -6 option to test IPv6 connections. (BZ# 1367415 ) Adjustable TTL limit for caching negative DNS responses in Unbound This update adds the cache-max-negative-ttl configuration option for the Unbound service, which enables adjustment of the maximum TTL specifically for caching negative DNS responses. Previously, this limit was determined by the domain SOA record, or it was automatically the same as the maximum TTL limit for caching all DNS responses, if configured. Note that if Unbound is determining the TTL for DNS response caching, the value set for the cache-min-ttl option has precedence over the value specified by cache-max-negative-ttl . (BZ#1382383) The scalability of UDP sockets has been improved This update improves UDP forward memory accounting and reduces the lock contention of UDP sockets. As a result, the overall ingress throughput of UDP sockets receiving traffic from multiple peers is considerably increased without any outward functional changes. (BZ#1388467) IP now supports IP_BIND_ADDRESS_NO_PORT in the kernel This update adds the IP_BIND_ADDRESS_NO_PORT socket option to the kernel. This allows the kernel to skip L4 tuple reservation when a bind() request is used to a port number of 0 . As a result, many simultaneous connections to different destination hosts can be maintained. (BZ#1374498) IPVS Source Hash scheduling now supports L4 hashing and SH fallback With this update, the IP Virtual Server (IPVS) Source Hash scheduling algorithm includes: L4 hashing SH fallback of requests to the active server in case the destination server has a weight of 0 , which indicates that the destination server is inactive. As a result, it is now possible to balance the load of requests from one source IP address based on port numbers. Requests to inactive servers no longer time out. (BZ#1365002) iproute now supports changing bridge port options With this update, changing bridge port options such as state , priority , and cost have been added to the iproute package. As a result, iproute can be used as an alternative to the bridge-utils package. (BZ#1373971) New options of Sockets API Extensions for SCTP (RFC 6458) implemented This update implements options SCTP_SNDINFO , SCTP_NXTINFO , SCTP_NXTINFO and SCTP_DEFAULT_SNDINFO to the Sockets API Extensions for the Stream Control Transmission Protocol (RFC 6458). These new options replace the options SCTP_SNDRCV , SCTP_EXTRCV and SCTP_DEFAULT_SEND_PARAM , which are now deprecated. See also the deprecated functionality section. (BZ#1339791) ss now supports SCTP sockets list Previously, the netstat utility provided a list of Stream Control Transmission Protocol (SCTP) sockets. With this update, the ss utility is able to display the same list. (BZ# 1063934 ) wpa_supplicant rebased to version 2.6 The wpa_supplicant packages have been upgraded to upstream version 2.6, which provides a number of bug fixes and enhancements. 
Notably, the wpa_supplicant utility now supports the Media Access Control Security (MACsec) encryption 802.1AE, which enables MACsec to be used in configuration by default. (BZ# 1404793 , BZ#1338005) Linux kernel now contains the switchdev infrastructure and mlxsw This update backports the following functionality into the Linux kernel: The Ethernet switch device driver model - the switchdev infrastructure; as a result, switch devices can now offload forwarding data plane from the kernel The mlxsw driver Switch hardware supported by mlxsw : Mellanox SwitchX-2 (slow path only) Mellanox SwitchIB and SwitchIB-2 Mellanox Spectrum Features supported by mlxsw : Per port jumbo frames, speed setting, state setting, statistics Port splitting together with splitter cables Port mirroring QoS: 802.1p, Data Center Bridging (DCB) Access Control Lists (ACLs) using TC flower offloading have been introduced as a Technology Preview Layer 2 features: VLANs Spanning Tree Protocol (STP) Link Aggregation (LAG) using team or bonding offloading Link Layer Discovery Protocol (LLDP) Layer 3 features: Unicast routing To configure all these features, use standard tools provided by the iproute package that has been updated as well. (BZ# 1297841 , BZ#1275772, BZ#1414400, BZ#1434587, BZ#1434591) The Linux bridge code rebased to version 4.9 The Linux bridge code has been upgraded to upstream version 4.9, which provides a number of bug fixes and enhancements over the version. Notable changes include: Support for 802.1ad VLAN filtering and Tx VLAN acceleration Support for 802.11 Proxy Address Resolution Protocol (ARP) Support for switching offloading by using switchdev VLAN support for user mdb entries Support for extended attributes in mdb entries Support for temporary port router Support for per-VLAN statistics Support for Internet Group Management Protocol/Multicast Listener Discovery (IGMP/MLD) statistics All configuration settings supported by using sysfs are now supported by netlink as well Added per-port flag to control the unknown multicast flood (BZ#1352289) bind-dyndb-ldap rebased to version 11.1 The bind-dyndb-ldap package has been upgraded to upstream version 11.1, which provides a number of bug fixes and enhancements over the version. Notably, the /etc/named.conf file now uses the new DynDB API. Updating the bind-dyndb-ldap package automatically converts the file to the new API style. (BZ# 1393889 ) DynDB API from the upstream version 9.11.0 of BIND added to Red Hat Enterprise Linux This update backports the API for the dyndb system plug-in, which was introduced in the bind package version 9.11.0 in upstream. As a result, the bind-dyndb-ldap plug-in in Red Hat Enterprise Linux now uses the new API. The downstream feature dynamic_db , which was used in releases of Red Hat Enterprise Linux, is no longer supported. Because the upstream dyndb uses a different configuration syntax than the downstream dynamic_db , the syntax also changes with this update. However, you do not need to make any manual configuration changes. (BZ# 1393886 ) tboot rebased to version 1.9.5 The tboot packages have been upgraded to upstream version 1.9.5, which provides a number of bug fixes and enhancements over the version. Notable changes include: This update adds the 2nd generation of the Link Control Protocol (LCP) creation utility for Trusted Platform Module (TPM) 2.0, as well as a user guide for the updated LCP creation utility. 
A workaround has been implemented to ensure the correct behavior of Intel Platform Trust Technology (PTT) and the Linux PTT driver. New fields have been added in the Linux kernel header struct declaration, in order to accommodate for new capabilities of the Linux kernel. (BZ#1384210) Packages related to rdma consolidated by rebase into rdma-core version 13 The packages related to the rdma package have been upgraded and consolidated into a single source package, rdma-core version 13. The packages are: rdma iwpmd libibverbs librdmacm ibacm libibumad libocrdma libmlx4 libmlx5 libhfi1verbs libi40iw srp_daemon (formerly srptools) libmthca libcxgb3 libcxgb4 libnes libipathverbs librxe rdma-ndd The following, previously not included, packages have been added as part of the new package rdma-core : libqedr libhns libvmw_pvrdma All ibverbs hardware-specific provider libraries are now bundled in the libibverbs sub-package, streamlining installation and preventing possible versioning mismatches. (BZ#1404035) OVN IP address management support added for static MAC addresses This update adds support for dynamic IP address assignment with user-specified static MAC addresses. As a result, Open Virtual Network (OVN) users can now create configurations with dynamic IP that are associated with static MAC addresses. (BZ# 1368043 ) Enhanced networked reliability on multihomed hosts On interfaces with a route that is already present on another interface, the NetworkManager utility now automatically switches the reverse path filtering method from Strict to Loose . This enhances network reliability on multihomed host machines. (BZ# 1394344 ) Offloading of GENEVE, VXLAN, and GRE tunnels is now supported With this update, the infrastructure to support offloading of GENEVE, VXLAN, and GRE tunnels has been added. In addition, various bugs have been fixed in the GENEVE tunnel implementation. (BZ#1326309) LCO for tunnel traffic is now supported With this update, the Local Checksum Offloading (LCO) technique has been added to enable certain network cards to utilize checksum offloading for tunnel traffic. This enhancement improves the performance of VXLAN, GRE, and other tunnels. (BZ#1326318) Improved tunnel performance on NICs With this update, tunnel performance on some Network Interface Cards (NICs) that do not support tunnel offloads by default has been enhanced. As a result, users can now take advantage of existing hardware offloads on these NICs. (BZ#1326353) NPT is now supported in the kernel With this update, the IPv6-to-IPv6 Network Prefix Translation (NPTv6) function defined in RFC 6296 has been added in the Netfilter framework. As a result, it is now possible to enable NPT for stateless translation between IPv6 prefixes. (BZ#1432897) DNS configuration is now supported through the D-Bus API Previously, external applications could not easily retrieve the DNS parameters used by NetworkManager . With this update, DNS configuration has been supported through the D-Bus API. As a result, all DNS-related information, including name servers and domains, is available to client applications through the D-Bus API of NetworkManager . An example of such application is the nmcli tool, which can now display DNS configuration. (BZ# 1404594 ) PPP support is now moved into a separate package With this update, the Point-to-Point Protocol (PPP) support is moved into a separate, optional NetworkManager-ppp package. 
As a result, the dependency chain of NetworkManager is smaller and it is possible to limit the number of installed packages. Note that to configure PPP settings, you must make sure that the NetworkManager-ppp package is installed. (BZ# 1404598 ) The tc utility now supports flower The tc utility has been enhanced to use the kernel flower traffic control classifier. With this update, a user can add, modify, or delete flower classifier rules from an interface. (BZ# 1422629 ) Fix to the CRC32c value computation in SCTP forwarding path Previously, the kernel incorrectly computed the CRC32c value of Stream Control Transmission Protocol (SCTP) packets with offloaded checksum when the kernel forwarded them to an interface that did not support offloading. This update fixes the computation of CRC32c in the forwarding path. As a result, SCTP packets are now correctly transmitted in the described situation. (BZ#1072503) New packages: iperf3 This update adds the iperf3 packages version 3.1.7 to Red Hat Enterprise Linux 7. The iperf3 utility enables active measuring of the maximum achievable bandwidth on IP networks. (BZ#913329) Installation of OVN now supports easily-configurable firewalld rules This feature adds firewalld configuration rules for Open Virtual Network (OVN) to the openvswitch packages. As a result, the user can install easier OVN with firewalld enabled, instead of needing to create firewalld configuration manually. (BZ# 1390938 ) netlink now supports bridge master attributes With this update, whenever bridge attributes are changed, a notification is sent out to listeners. This includes changes triggered by sysfs, rtnl, ioctl, or user applications, such as NetworkManager . (BZ#950243)
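As an illustration of the flower classifier support described above, the following sketch attaches an ingress qdisc to a NIC and adds a flower rule that drops inbound TCP traffic to port 80; the interface name, match, and action are examples only, not part of the release notes:

# tc qdisc add dev eth0 ingress
# tc filter add dev eth0 parent ffff: protocol ip flower ip_proto tcp dst_port 80 action drop
# tc filter show dev eth0 parent ffff:
# tc qdisc del dev eth0 ingress

Deleting the ingress qdisc removes the filter again.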
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_networking
Chapter 4. Certificate types and descriptions
Chapter 4. Certificate types and descriptions 4.1. User-provided certificates for the API server 4.1.1. Purpose The API server is accessible by clients external to the cluster at api.<cluster_name>.<base_domain> . You might want clients to access the API server at a different hostname or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. The administrator must set a custom default certificate to be used by the API server when serving content. 4.1.2. Location The user-provided certificates must be provided in a kubernetes.io/tls type Secret in the openshift-config namespace. Update the API server cluster configuration, the apiserver/cluster resource, to enable the use of the user-provided certificate. 4.1.3. Management User-provided certificates are managed by the user. 4.1.4. Expiration API server client certificate expiration is less than five minutes. User-provided certificates are managed by the user. 4.1.5. Customization Update the secret containing the user-managed certificate as needed. Additional resources Adding API server certificates 4.2. Proxy certificates 4.2.1. Purpose Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Additional resources Configuring the cluster-wide proxy 4.2.2. Managing proxy certificates during installation The additionalTrustBundle value of the installer configuration is used to specify any proxy-trusted CA certificates during installation. For example: USD cat install-config.yaml Example output ... proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 4.2.3. Location The user-provided trust bundle is represented as a config map. The config map is mounted into the file system of platform components that make egress HTTPS calls. Typically, Operators mount the config map to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem , but this is not required by the proxy. A proxy can modify or inspect the HTTPS connection. In either case, the proxy must generate and sign a new certificate for the connection. 
Complete proxy support means connecting to the specified proxy and trusting any signatures it has generated. Therefore, it is necessary to let the user specify a trusted root, such that any certificate chain connected to that trusted root is also trusted. If you use the RHCOS trust bundle, place CA certificates in /etc/pki/ca-trust/source/anchors . For more information, see Using shared system certificates in the Red Hat Enterprise Linux (RHEL) Securing networks document. 4.2.4. Expiration The user sets the expiration term of the user-provided trust bundle. The default expiration term is defined by the CA certificate itself. It is up to the CA administrator to configure this for the certificate before it can be used by OpenShift Container Platform or RHCOS. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.2.5. Services By default, all platform components that make egress HTTPS calls will use the RHCOS trust bundle. If trustedCA is defined, it will also be used. Any service that is running on the RHCOS node is able to use the trust bundle of the node. 4.2.6. Management These certificates are managed by the system and not the user. 4.2.7. Customization Updating the user-provided trust bundle consists of either: updating the PEM-encoded certificates in the config map referenced by trustedCA, or creating a config map in the namespace openshift-config that contains the new trust bundle and updating trustedCA to reference the name of the new config map. The mechanism for writing CA certificates to the RHCOS trust bundle is exactly the same as writing any other file to RHCOS, which is done through the use of machine configs. When the Machine Config Operator (MCO) applies the new machine config that contains the new CA certificates, it runs the program update-ca-trust afterwards and restarts the CRI-O service on the RHCOS nodes. This update does not require a node reboot. Restarting the CRI-O service automatically updates the trust bundle with the new CA certificates. 
For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt The trust store of machines must also support updating the trust store of nodes. 4.2.8. Renewal There are no Operators that can auto-renew certificates on the RHCOS nodes. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.3. Service CA certificates 4.3.1. Purpose service-ca is an Operator that creates a self-signed CA when an OpenShift Container Platform cluster is deployed. 4.3.2. Expiration A custom expiration term is not supported. The self-signed CA is stored in a secret with qualified name service-ca/signing-key in fields tls.crt (certificate(s)), tls.key (private key), and ca-bundle.crt (CA bundle). Other services can request a service serving certificate by annotating a service resource with service.beta.openshift.io/serving-cert-secret-name: <secret name> . In response, the Operator generates a new certificate, as tls.crt , and private key, as tls.key to the named secret. The certificate is valid for two years. Other services can request that the CA bundle for the service CA be injected into API service or config map resources by annotating with service.beta.openshift.io/inject-cabundle: true to support validating certificates generated from the service CA. 
In response, the Operator writes its current CA bundle to the CABundle field of an API service or as service-ca.crt to a config map. As of OpenShift Container Platform 4.3.5, automated rotation is supported and is backported to some 4.2.z and 4.3.z releases. For any release supporting automated rotation, the service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA. The service CA expiration of 26 months is longer than the expected upgrade interval for a supported OpenShift Container Platform cluster, such that non-control plane consumers of service CA certificates will be refreshed after CA rotation and prior to the expiration of the pre-rotation CA. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. 4.3.3. Management These certificates are managed by the system and not the user. 4.3.4. Services Services that use service CA certificates include: cluster-autoscaler-operator cluster-monitoring-operator cluster-authentication-operator cluster-image-registry-operator cluster-ingress-operator cluster-kube-apiserver-operator cluster-kube-controller-manager-operator cluster-kube-scheduler-operator cluster-networking-operator cluster-openshift-apiserver-operator cluster-openshift-controller-manager-operator cluster-samples-operator cluster-storage-operator machine-config-operator console-operator insights-operator machine-api-operator operator-lifecycle-manager CSI driver operators This is not a comprehensive list. Additional resources Manually rotate service serving certificates Securing service traffic using service serving certificate secrets 4.4. Node certificates 4.4.1. Purpose Node certificates are signed by the cluster and allow the kubelet to communicate with the Kubernetes API server. They come from the kubelet CA certificate, which is generated by the bootstrap process. 4.4.2. Location The kubelet CA certificate is located in the kube-apiserver-to-kubelet-signer secret in the openshift-kube-apiserver-operator namespace. 4.4.3. Management These certificates are managed by the system and not the user. 4.4.4. Expiration Node certificates are automatically rotated after 292 days and expire after 365 days. 4.4.5. Renewal The Kubernetes API Server Operator automatically generates a new kube-apiserver-to-kubelet-signer CA certificate at 292 days. The old CA certificate is removed after 365 days. Nodes are not rebooted when a kubelet CA certificate is renewed or removed. Cluster administrators can manually renew the kubelet CA certificate by running the following command: USD oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after- Additional resources Working with nodes 4.5. Bootstrap certificates 4.5.1. Purpose The kubelet, in OpenShift Container Platform 4 and later, uses the bootstrap certificate located in /etc/kubernetes/kubeconfig to initially bootstrap. This is followed by the bootstrap initialization process and authorization of the kubelet to create a CSR . In that process, the kubelet generates a CSR while communicating over the bootstrap channel. The controller manager signs the CSR, resulting in a certificate that the kubelet manages. 4.5.2. 
Management These certificates are managed by the system and not the user. 4.5.3. Expiration This bootstrap certificate is valid for 10 years. The kubelet-managed certificate is valid for one year and rotates automatically at around the 80 percent mark of that one year. Note OpenShift Lifecycle Manager (OLM) does not update the bootstrap certificate. 4.5.4. Customization You cannot customize the bootstrap certificates. 4.6. etcd certificates 4.6.1. Purpose etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process. 4.6.2. Expiration The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years. 4.6.3. Rotating the etcd certificate The etcd certificate automatically rotates using the etcd cluster Operator. However, if a certificate must be rotated before it is automatically rotated, you can manually rotate it. Procedure Make a backup copy of the current signer certificate by running the following command: USD oc get secret -n openshift-etcd etcd-signer -oyaml > signer_backup_secret.yaml Delete the existing signer certificate by running the following command: USD oc delete secret -n openshift-etcd etcd-signer Wait for the static pod roll out by running the following command. The static pod roll out can take a few minutes to complete. USD oc wait --for=condition=Progressing=False --timeout=15m clusteroperator/etcd 4.6.4. Removing an unused certificate authority from the bundle A manual rotation does not immediately update the trust bundle to remove the public key of a signer certificate. The public key of the signer certificate is removed at the expiration date, however if the public key must be removed before it expires, you can delete it. Procedure Delete the key by running the following command: USD oc delete configmap -n openshift-etcd etcd-ca-bundle Wait for the static pod rollout by running the following command. The bundle regenerates with the current signer certificate and all unknown or unused keys are deleted. USD oc adm wait-for-stable-cluster --minimum-stable-period 2m 4.6.5. etcd certificate rotation alerts and metrics signer certificates Two alerts inform users about pending etcd certificate expiration: etcdSignerCAExpirationWarning Occurs 730 days until the signer expires. etcdSignerCAExpirationCritical Occurs 365 days until the signer expires. These alerts track the expiration date of the signer certificate authorities in the openshift-etcd namespace. You can rotate the certificate for the following reasons: You receive an expiration alert. The private key is leaked. Important When a private key is leaked, you must rotate all of the certificates. There is an etcd signer for the OpenShift Container Platform metrics system. Substitute the following metrics parameters in Rotating the etcd certificate . etcd-metric-signer instead of etcd-signer etcd-metrics-ca-bundle instead of etcd-ca-bundle 4.6.6. Management These certificates are only managed by the system and are automatically rotated. 4.6.7. Services etcd certificates are used for encrypted communication between etcd member peers and encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd: Peer certificates: Used for communication between etcd members. Client certificates: Used for encrypted server-client communication. 
Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets ( etcd-client , etcd-metric-client , etcd-metric-signer , and etcd-signer ) are added to the openshift-config , openshift-monitoring , and openshift-kube-apiserver namespaces. Server certificates: Used by the etcd server for authenticating client requests. Metric certificates: All metric consumers connect to proxy with metric-client certificates. Additional resources Restoring to a cluster state 4.7. OLM certificates 4.7.1. Management All certificates for Operator Lifecycle Manager (OLM) components ( olm-operator , catalog-operator , packageserver , and marketplace-operator ) are managed by the system. When installing Operators that include webhooks or API services in their ClusterServiceVersion (CSV) object, OLM creates and rotates the certificates for these resources. Certificates for resources in the openshift-operator-lifecycle-manager namespace are managed by OLM. OLM will not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user using the subscription config. steps Configuring proxy support in Operator Lifecycle Manager 4.7.2. Additional resources Proxy certificates Replacing the default ingress certificate Updating the CA bundle 4.8. Aggregated API client certificates 4.8.1. Purpose Aggregated API client certificates are used to authenticate the KubeAPIServer when connecting to the Aggregated API Servers. 4.8.2. Management These certificates are managed by the system and not the user. 4.8.3. Expiration This CA is valid for 30 days. The managed client certificates are valid for 30 days. CA and client certificates are rotated automatically through the use of controllers. 4.8.4. Customization You cannot customize the aggregated API server certificates. 4.9. Machine Config Operator certificates 4.9.1. Purpose This certificate authority is used to secure connections from nodes to Machine Config Server (MCS) during initial provisioning. There are two certificates: . A self-signed CA, the MCS CA . A derived certificate, the MCS cert 4.9.1.1. Provisioning details OpenShift Container Platform installations that use Red Hat Enterprise Linux CoreOS (RHCOS) are installed by using Ignition. This process is split into two parts: An Ignition config is created that references a URL for the full configuration served by the MCS. For user-provisioned infrastucture installation methods, the Ignition config manifests as a worker.ign file created by the openshift-install command. For installer-provisioned infrastructure installation methods that use the Machine API Operator, this configuration appears as the worker-user-data secret. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. 
Additional resources Machine Config Operator . About the OVN-Kubernetes network plugin 4.9.1.2. Provisioning chain of trust The MCS CA is injected into the Ignition configuration under the security.tls.certificateAuthorities configuration field. The MCS then provides the complete configuration using the MCS cert presented by the web server. The client validates that the MCS cert presented by the server has a chain of trust to an authority it recognizes. In this case, the MCS CA is that authority, and it signs the MCS cert. This ensures that the client is accessing the correct server. The client in this case is Ignition running on a machine in the initramfs. 4.9.1.3. Key material inside a cluster The MCS CA appears in the cluster as a config map in the kube-system namespace, root-ca object, with ca.crt key. The private key is not stored in the cluster and is discarded after the installation completes. The MCS cert appears in the cluster as a secret in the openshift-machine-config-operator namespace and machine-config-server-tls object with the tls.crt and tls.key keys. 4.9.2. Management At this time, directly modifying either of these certificates is not supported. 4.9.3. Expiration The MCS CA is valid for 10 years. The issued serving certificates are valid for 10 years. 4.9.4. Customization You cannot customize the Machine Config Operator certificates. 4.10. User-provided certificates for default ingress 4.10.1. Purpose Applications are usually exposed at <route_name>.apps.<cluster_name>.<base_domain> . The <cluster_name> and <base_domain> come from the installation config file. <route_name> is the host field of the route, if specified, or the route name. For example, hello-openshift-default.apps.username.devcluster.openshift.com . hello-openshift is the name of the route and the route is in the default namespace. You might want clients to access the applications without the need to distribute the cluster-managed CA certificates to the clients. The administrator must set a custom default certificate when serving application content. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use operator-generated default certificates in production clusters. 4.10.2. Location The user-provided certificates must be provided in a tls type Secret resource in the openshift-ingress namespace. Update the IngressController CR in the openshift-ingress-operator namespace to enable the use of the user-provided certificate. For more information on this process, see Setting a custom default certificate . 4.10.3. Management User-provided certificates are managed by the user. 4.10.4. Expiration User-provided certificates are managed by the user. 4.10.5. Services Applications deployed on the cluster use user-provided certificates for default ingress. 4.10.6. Customization Update the secret containing the user-managed certificate as needed. Additional resources Replacing the default ingress certificate 4.11. Ingress certificates 4.11.1. Purpose The Ingress Operator uses certificates for: Securing access to metrics for Prometheus. Securing access to routes. 4.11.2. Location To secure access to Ingress Operator and Ingress Controller metrics, the Ingress Operator uses service serving certificates. 
The Operator requests a certificate from the service-ca controller for its own metrics, and the service-ca controller puts the certificate in a secret named metrics-tls in the openshift-ingress-operator namespace. Additionally, the Ingress Operator requests a certificate for each Ingress Controller, and the service-ca controller puts the certificate in a secret named router-metrics-certs-<name> , where <name> is the name of the Ingress Controller, in the openshift-ingress namespace. Each Ingress Controller has a default certificate that it uses for secured routes that do not specify their own certificates. Unless you specify a custom certificate, the Operator uses a self-signed certificate by default. The Operator uses its own self-signed signing certificate to sign any default certificate that it generates. The Operator generates this signing certificate and puts it in a secret named router-ca in the openshift-ingress-operator namespace. When the Operator generates a default certificate, it puts the default certificate in a secret named router-certs-<name> (where <name> is the name of the Ingress Controller) in the openshift-ingress namespace. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. 4.11.3. Workflow Figure 4.1. Custom certificate workflow Figure 4.2. Default certificate workflow An empty defaultCertificate field causes the Ingress Operator to use its self-signed CA to generate a serving certificate for the specified domain. The default CA certificate and key generated by the Ingress Operator. Used to sign Operator-generated default serving certificates. In the default workflow, the wildcard default serving certificate, created by the Ingress Operator and signed using the generated default CA certificate. In the custom workflow, this is the user-provided certificate. The router deployment. Uses the certificate in secrets/router-certs-default as its default front-end server certificate. In the default workflow, the contents of the wildcard default serving certificate (public and private parts) are copied here to enable OAuth integration. In the custom workflow, this is the user-provided certificate. The public (certificate) part of the default serving certificate. Replaces the configmaps/router-ca resource. The user updates the cluster proxy configuration with the CA certificate that signed the ingresscontroller serving certificate. This enables components like auth , console , and the registry to trust the serving certificate. The cluster-wide trusted CA bundle containing the combined Red Hat Enterprise Linux CoreOS (RHCOS) and user-provided CA bundles or an RHCOS-only bundle if a user bundle is not provided. The custom CA certificate bundle, which instructs other components (for example, auth and console ) to trust an ingresscontroller configured with a custom certificate. The trustedCA field is used to reference the user-provided CA bundle. The Cluster Network Operator injects the trusted CA bundle into the proxy-ca config map. OpenShift Container Platform 4.18 and newer use default-ingress-cert . 4.11.4. Expiration The expiration terms for the Ingress Operator's certificates are as follows: The expiration date for metrics certificates that the service-ca controller creates is two years after the date of creation. 
The expiration date for the Operator's signing certificate is two years after the date of creation. The expiration date for default certificates that the Operator generates is two years after the date of creation. You cannot specify custom expiration terms on certificates that the Ingress Operator or service-ca controller creates. You cannot specify expiration terms when installing OpenShift Container Platform for certificates that the Ingress Operator or service-ca controller creates. 4.11.5. Services Prometheus uses the certificates that secure metrics. The Ingress Operator uses its signing certificate to sign default certificates that it generates for Ingress Controllers for which you do not set custom default certificates. Cluster components that use secured routes may use the default Ingress Controller's default certificate. Ingress to the cluster via a secured route uses the default certificate of the Ingress Controller by which the route is accessed unless the route specifies its own certificate. 4.11.6. Management Ingress certificates are managed by the user. See Replacing the default ingress certificate for more information. 4.11.7. Renewal The service-ca controller automatically rotates the certificates that it issues. However, it is possible to use oc delete secret <secret> to manually rotate service serving certificates. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure. 4.12. Monitoring and OpenShift Logging Operator component certificates 4.12.1. Expiration Monitoring components secure their traffic with service CA certificates. These certificates are valid for 2 years and are replaced automatically on rotation of the service CA, which is every 13 months. If the certificate lives in the openshift-monitoring or openshift-logging namespace, it is system managed and rotated automatically. 4.12.2. Management These certificates are managed by the system and not the user. 4.13. Control plane certificates 4.13.1. Location Control plane certificates are included in these namespaces: openshift-config-managed openshift-kube-apiserver openshift-kube-apiserver-operator openshift-kube-controller-manager openshift-kube-controller-manager-operator openshift-kube-scheduler 4.13.2. Management Control plane certificates are managed by the system and rotated automatically. In the rare case that your control plane certificates have expired, see Recovering from expired control plane certificates .
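As a brief sketch of the manual rotation described in the Renewal section above, you can check the expiration of a service serving certificate and then delete its secret so that the service-ca controller reissues it. The metrics-tls secret in the openshift-ingress-operator namespace, mentioned earlier in this chapter, is used here only as an example; any service serving certificate secret can be handled the same way:

$ oc get secret metrics-tls -n openshift-ingress-operator -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
$ oc delete secret metrics-tls -n openshift-ingress-operator

After the deletion, the service-ca controller recreates the secret with a newly issued certificate; you might need to restart the consuming pods so that they load the reissued certificate.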
[ "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "cat install-config.yaml", "proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt", "oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after-", "oc get secret -n openshift-etcd etcd-signer -oyaml > signer_backup_secret.yaml", "oc delete secret -n openshift-etcd etcd-signer", "oc wait --for=condition=Progressing=False --timeout=15m clusteroperator/etcd", "oc delete configmap -n openshift-etcd etcd-ca-bundle", "oc adm wait-for-stable-cluster --minimum-stable-period 2m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/certificate-types-and-descriptions
Chapter 33. Managing role-based access controls using the IdM Web UI
Chapter 33. Managing role-based access controls using the IdM Web UI Learn more about role-based access control in Identity Management (IdM) and the following operations which are run in the web interface (Web UI): Managing permissions Managing privileges Managing roles 33.1. Role-based access control in IdM Role-based access control (RBAC) in IdM grants a very different kind of authority to users compared to self-service and delegation access controls. Role-based access control is composed of three parts: Permissions grant the right to perform a specific task such as adding or deleting users, modifying a group, and enabling read-access. Privileges combine permissions, for example all the permissions needed to add a new user. Roles grant a set of privileges to users, user groups, hosts or host groups. 33.1.1. Permissions in IdM Permissions are the lowest level unit of role-based access control, they define operations together with the LDAP entries to which those operations apply. Comparable to building blocks, permissions can be assigned to as many privileges as needed. One or more rights define what operations are allowed: write read search compare add delete all These operations apply to three basic targets : subtree : a domain name (DN); the subtree under this DN target filter : an LDAP filter target : DN with possible wildcards to specify entries Additionally, the following convenience options set the corresponding attribute(s): type : a type of object (user, group, etc); sets subtree and target filter memberof : members of a group; sets a target filter Note Setting the memberof attribute permission is not applied if the target LDAP entry does not contain any reference to group membership. targetgroup : grants access to modify a specific group (such as granting the rights to manage group membership); sets a target With IdM permissions, you can control which users have access to which objects and even which attributes of these objects. IdM enables you to allow or block individual attributes or change the entire visibility of a specific IdM function, such as users, groups, or sudo, to all anonymous users, all authenticated users, or just a certain group of privileged users. For example, the flexibility of this approach to permissions is useful for an administrator who wants to limit access of users or groups only to the specific sections these users or groups need to access and to make the other sections completely hidden to them. Note A permission cannot contain other permissions. 33.1.2. Default managed permissions Managed permissions are permissions that come by default with IdM. They behave like other permissions created by the user, with the following differences: You cannot delete them or modify their name, location, and target attributes. They have three sets of attributes: Default attributes, the user cannot modify them, as they are managed by IdM Included attributes, which are additional attributes added by the user Excluded attributes, which are attributes removed by the user A managed permission applies to all attributes that appear in the default and included attribute sets but not in the excluded set. Note While you cannot delete a managed permission, setting its bind type to permission and removing the managed permission from all privileges effectively disables it. Names of all managed permissions start with System: , for example System: Add Sudo rule or System: Modify Services . Earlier versions of IdM used a different scheme for default permissions. 
For example, the user could not delete them and was only able to assign them to privileges. Most of these default permissions have been turned into managed permissions, however, the following permissions still use the scheme: Add Automember Rebuild Membership Task Add Configuration Sub-Entries Add Replication Agreements Certificate Remove Hold Get Certificates status from the CA Read DNA Range Modify DNA Range Read PassSync Managers Configuration Modify PassSync Managers Configuration Read Replication Agreements Modify Replication Agreements Remove Replication Agreements Read LDBM Database Configuration Request Certificate Request Certificate ignoring CA ACLs Request Certificates from a different host Retrieve Certificates from the CA Revoke Certificate Write IPA Configuration Note If you attempt to modify a managed permission from the command line, the system does not allow you to change the attributes that you cannot modify, the command fails. If you attempt to modify a managed permission from the Web UI, the attributes that you cannot modify are disabled. 33.1.3. Privileges in IdM A privilege is a group of permissions applicable to a role. While a permission provides the rights to do a single operation, there are certain IdM tasks that require multiple permissions to succeed. Therefore, a privilege combines the different permissions required to perform a specific task. For example, setting up an account for a new IdM user requires the following permissions: Creating a new user entry Resetting a user password Adding the new user to the default IPA users group Combining these three low-level tasks into a higher level task in the form of a custom privilege named, for example, Add User makes it easier for a system administrator to manage roles. IdM already contains several default privileges. Apart from users and user groups, privileges are also assigned to hosts and host groups, as well as network services. This practice permits a fine-grained control of operations by a set of users on a set of hosts using specific network services. Note A privilege may not contain other privileges. 33.1.4. Roles in IdM A role is a list of privileges that users specified for the role possess. In effect, permissions grant the ability to perform given low-level tasks (such as creating a user entry and adding an entry to a group), privileges combine one or more of these permissions needed for a higher-level task (such as creating a new user in a given group). Roles gather privileges together as needed: for example, a User Administrator role would be able to add, modify, and delete users. Important Roles are used to classify permitted actions. They are not used as a tool to implement privilege separation or to protect from privilege escalation. Note Roles can not contain other roles. 33.1.5. Predefined roles in Identity Management Red Hat Enterprise Linux Identity Management provides the following range of pre-defined roles: Table 33.1. 
Predefined Roles in Identity Management

Role: Enrollment Administrator; Privilege: Host Enrollment; Description: Responsible for client, or host, enrollment
Role: helpdesk; Privilege: Modify Users and Reset passwords, Modify Group membership; Description: Responsible for performing simple user administration tasks
Role: IT Security Specialist; Privilege: Netgroups Administrators, HBAC Administrator, Sudo Administrator; Description: Responsible for managing security policy such as host-based access controls, sudo rules
Role: IT Specialist; Privilege: Host Administrators, Host Group Administrators, Service Administrators, Automount Administrators; Description: Responsible for managing hosts
Role: Security Architect; Privilege: Delegation Administrator, Replication Administrators, Write IPA Configuration, Password Policy Administrator; Description: Responsible for managing the Identity Management environment, creating trusts, creating replication agreements
Role: User Administrator; Privilege: User Administrators, Group Administrators, Stage User Administrators; Description: Responsible for creating users and groups

33.2. Managing permissions in the IdM Web UI Follow this procedure to manage permissions in Identity Management (IdM) using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure To add a new permission, open the Role-Based Access Control submenu in the IPA Server tab and select Permissions : The list of permissions opens. Click the Add button at the top of the list of permissions: The Add Permission form opens. Specify the name of the new permission and define its properties accordingly: Select the appropriate Bind rule type: permission is the default permission type, granting access through privileges and roles all specifies that the permission applies to all authenticated users anonymous specifies that the permission applies to all users, including unauthenticated users Note It is not possible to add permissions with a non-default bind rule type to privileges. You also cannot set a permission that is already present in a privilege to a non-default bind rule type. Choose the rights to grant with this permission in Granted rights . Define the method to identify the target entries for the permission: Type specifies an entry type, such as user, host, or service. If you choose a value for the Type setting, a list of all possible attributes that will be accessible through this ACI for that entry type appears under Effective Attributes . Defining Type sets Subtree and Target DN to one of the predefined values. Subtree (required) specifies a subtree entry; every entry beneath this subtree entry is then targeted. Provide an existing subtree entry, as Subtree does not accept wildcards or non-existent distinguished names (DNs). For example: cn=automount,dc=example,dc=com Extra target filter uses an LDAP filter to identify which entries the permission applies to. The filter can be any valid LDAP filter, for example: (!(objectclass=posixgroup)) IdM automatically checks the validity of the given filter. If you enter an invalid filter, IdM warns you about this when you attempt to save the permission. Target DN specifies the distinguished name (DN) and accepts wildcards. For example: uid=*,cn=users,cn=accounts,dc=com Member of group sets the target filter to members of the given group. After you specify the filter settings and click Add , IdM validates the filter. If all the permission settings are correct, IdM performs the search.
If some of the permission settings are incorrect, IdM displays a message informing you about which setting is set incorrectly. Note Setting the memberof attribute permission is not applied if the target LDAP entry does not contain any reference to group membership. Add attributes to the permission: If you set Type , choose the Effective attributes from the list of available ACI attributes. If you did not use Type , add the attributes manually by writing them into the Effective attributes field. Add a single attribute at a time; to add multiple attributes, click Add to add another input field. Important If you do not set any attributes for the permission, then the permission includes all attributes by default. Finish adding the permission with the Add buttons at the bottom of the form: Click the Add button to save the permission and go back to the list of permissions. Alternatively, you can save the permission and continue adding additional permissions in the same form by clicking the Add and Add another button. The Add and Edit button enables you to save and continue editing the newly created permission. Optional: You can also edit the properties of an existing permission by clicking its name in the list of permissions to display the Permission settings page. Optional: If you need to remove an existing permission, select the checkbox next to its name in the list and click the Delete button to display the Remove permissions dialog. Note Operations on default managed permissions are restricted: the attributes you cannot modify are disabled in the IdM Web UI and you cannot delete the managed permissions completely. However, you can effectively disable a managed permission that has a bind type set to permission by removing the managed permission from all privileges. For example, you can create a permission that lets its holders write the member attribute of the engineers group so that they can add or remove group members; a command-line sketch of such a permission appears at the end of this chapter. 33.3. Managing privileges in the IdM Web UI Follow this procedure to manage privileges in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Existing permissions. For details about permissions, see Managing permissions in the IdM Web UI . Procedure To add a new privilege, open the Role-Based Access Control submenu in the IPA Server tab and select Privileges : The list of privileges opens. Click the Add button at the top of the list of privileges: The Add Privilege form opens. Enter the name and a description of the privilege: Click the Add and Edit button to save the new privilege and continue to the privilege configuration page to add permissions. Edit the properties of a privilege by clicking its name in the privileges list. The privilege configuration page opens. The Permissions tab displays a list of permissions included in the selected privilege. Click the Add button at the top of the list to add permissions to the privilege: Select the checkbox next to the name of each permission to add, and use the > button to move the permissions to the Prospective column: Confirm by clicking the Add button. Optional: If you need to remove permissions, select the checkbox next to the relevant permission and click the Delete button: the Remove privileges from permissions dialog opens.
Optional: If you need to delete an existing privilege, select the checkbox next to its name in the list and click the Delete button: the Remove privileges dialog opens. 33.4. Managing roles in the IdM Web UI Follow this procedure to manage roles in Identity Management (IdM) using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Existing privileges. For details about privileges, see Managing privileges in the IdM Web UI . Procedure To add a new role, open the Role-Based Access Control submenu in the IPA Server tab and select Roles : The list of roles opens. Click the Add button at the top of the list of roles. The Add Role form opens. Enter the role name and a description: Click the Add and Edit button to save the new role and go to the role configuration page to add privileges and users. Edit the properties of a role by clicking its name in the roles list. The role configuration page opens. Add members using the Users , User Groups , Hosts , Host Groups or Services tabs, by clicking the Add button at the top of the relevant list. In the window that opens, select the members on the left and use the > button to move them to the Prospective column. At the top of the Privileges tab, click Add . Select the privileges on the left and use the > button to move them to the Prospective column. Click the Add button to save. Optional: If you need to remove privileges or members from a role, select the checkbox next to the name of the entity you want to remove and click the Delete button. A dialog opens. Optional: If you need to remove an existing role, select the checkbox next to its name in the list and click the Delete button to display the Remove roles dialog.
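The procedures in this chapter use the IdM Web UI. As a complement, the following is a minimal command-line sketch of the same permission, privilege, and role chain, including a permission like the engineers group example mentioned in the permissions section above. The object names and the user jsmith are hypothetical, and option spellings can differ between IdM versions, so check the output of ipa help <command> before use:

$ kinit admin
$ ipa permission-add "Manage engineers membership" --right=write --targetgroup=engineers --attrs=member
$ ipa privilege-add "Engineers Membership Management" --desc="Add or remove engineers group members"
$ ipa privilege-add-permission "Engineers Membership Management" --permissions="Manage engineers membership"
$ ipa role-add "Engineering Manager" --desc="Manages membership of the engineers group"
$ ipa role-add-privilege "Engineering Manager" --privileges="Engineers Membership Management"
$ ipa role-add-member "Engineering Manager" --users=jsmith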
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 9.5. 4.1. Installer and image creation Minimal RHEL installation now installs only the s390utils-core package In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. As a result, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. If you want to use the s390utils-base package with a minimal RHEL installation, you must manually install the package after completing the RHEL installation or explicitly install s390utils-base using a Kickstart file. Bugzilla:1932480 [1] 4.2. Security NSS rebased to 3.101 The NSS cryptographic toolkit packages have been rebased to upstream version 3.101, which provides many bug fixes and enhancements. The most notable changes are the following: DTLS 1.3 protocol is now supported (RFC 9147). PBMAC1 support has been added to PKCS#12 (RFC 9579). The X25519Kyber768Draft00 hybrid post-quantum key agreement has experimental support ( draft-tls-westerbaan-xyber768d00 ). lib::pkix is the default validator in RHEL 10. RSA certificates with keys shorter than 2048 bits stop working, in accordance with the system-wide cryptographic policy (breaking fix). Jira:RHEL-46840 [1] Libreswan accepts IPv6 SAN extensions Previously, IPsec connection failed when setting up certificate-based authentication with a certificate that contained a subjectAltName (SAN) extension with an IPv6 address. With this update, the pluto daemon has been modified to accept IPv6 SAN and IPv4. As a result, IPsec connection is now correctly established with IPv6 address embedded in the certificate as an ID. Jira:RHEL-32720 [1] Custom key sizes in ssh-keygen You can now configure the size of keys generated by the /usr/libexec/openssh/sshd-keygen script by setting environment variables SSH_RSA_BITS and SSH_ECDSA_BITS in the /etc/sysconfig/sshd environment file. Jira:RHEL-26454 [1] fips-mode-setup checks for use of Argon2 KDF in open LUKS volumes before enabling FIPS mode The fips-mode-setup system management command now detects key derivation functions (KDF) used in currently open LUKS volumes, and aborts if it detects usage of Argon2 KDF. This is because Argon2 KDF is not FIPS-compatible, so preventing its use helps ensure FIPS compliance. As a result, switching into FIPS mode on a system with open LUKS volumes that use Argon2 as a KDF is blocked until those volumes are closed or converted to a different KDF. Jira:RHEL-39026 New SELinux boolean to allow QEMU Guest Agent executing confined commands Previously, commands that were supposed to run in a confined context through the QEMU Guest Agent daemon program, such as mount , failed with an Access Vector Cache (AVC) denial. To be able to run these commands, the guest-agent must run in the virt_qemu_ga_unconfined_t domain. Therefore, this update adds the SELinux policy boolean virt_qemu_ga_run_unconfined that allows guest-agent to make the transition to virt_qemu_ga_unconfined_t for executables located in any of the following directories: /etc/qemu-ga/fsfreeze-hook.d/ /usr/libexec/qemu-ga/fsfreeze-hook.d/ /var/run/qemu-ga/fsfreeze-hook.d/ In addition, the necessary rules for transitions for the qemu-ga daemon have been added to the SELinux policy boolean. 
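A minimal sketch of enabling the boolean described above on a host that runs the guest agent; setsebool with the -P flag makes the change persistent across reboots, and getsebool verifies the current value:

# setsebool -P virt_qemu_ga_run_unconfined on
# getsebool virt_qemu_ga_run_unconfined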
As a result, you can now run confined commands through the QEMU Guest Agent without AVC denials by enabling the virt_qemu_ga_run_unconfined boolean. Jira:RHEL-31211 OpenSSL rebased to 3.2.2 The OpenSSL packages have been rebased to upstream version 3.2.2. This update brings various enhancements and bug fixes, most notably the following: The openssl req command with the -extensions option no longer mishandles extensions when creating certificate signing requests (CSR). Previously, the command fetched, parsed, and checked the name of the configuration file section for consistency but the name was not used for adding extensions to the created CSR file. With this fix, the extension is added to the generated CSR. As a side effect of this change, if the section specifies an extension incompatible with its use in the CSR, the command might fail with an error such as error:11000080:X509 V3 routines:X509V3_EXT_nconf_int:error in extension:crypto/x509/v3_conf.c:48:section=server_cert, name=authorityKeyIdentifier, value=keyid, issuer:always . The default X.500 distinguished name (DN) formatting has been changed to use the UTF-8 formatter. This also causes the removal of space characters around the equal sign ( = ) that separates DN element types from their values. Certificate compression extension (RFC 8879) is now supported. The QUIC protocol can now be used on the client side as a Technology Preview. The Argon2d, Argon2i, and Argon2id key derivation functions (KDF) are supported. Brainpool curves have been added to the TLS 1.3 protocol (RFC 8734) but Brainpool curves remain disabled in all supported system-wide cryptographic policies. Jira:RHEL-26271 crypto-policies provide algorithm selection in Java The crypto-policies packages have been updated to extend its control to algorithm selection in Java. This is caused by the evolution of the Java cryptographic agility configuration and crypto-policies needing to catch up to provide a better mapping for a more consistent system-wide configuration. Specifically, the update has the following changes: DTLS 1.0 is now controlled by the protocol option, is disabled by default, and can be reenabled by using the protocol@java-tls = DTLS1.0+ scoped directive. The anon and NULL ciphersuites are now controlled by cipher@java-tls = NULL and disabled by default. The list of signature algorithms is now controlled by the sign@java-tls scoped directive and aligned to the system-wide defaults. The list of signature algorithms is now controlled by the sign option and aligned to the system-wide defaults. If necessary, you can re-enable the use of desired algorithms specifically with Java with a sign@java-tls = <algorithm1>+ <algorithm2>+ scoped directive. Elliptic curve (EC) keys smaller than 256 bits are disabled unconditionally to align with upstream guidance. As a result, the list of cryptographic algorithms allowed for use with Java by default better matches system-wide defaults. For information on interoperability see the /etc/crypto-policies/back-ends/java.config file and configure your active cryptographic policy accordingly. Jira:RHEL-45620 [1] The selinux-policy git repository for CentOS Stream 10 is now publicly accessible CentOS Stream contributors now can participate in the development of the SELinux policy by contributing to the c10s branch of the fedora-selinux/selinux-policy git repository. Jira:RHEL-22960 clevis rebased to version 20 The clevis packages have been upgraded to version 20. 
The most notable enhancements and fixes include the following: Increased security by fixing potential problems reported by static analyzer tools in the clevis luks command, udisks2 integration, and the Shamir's Secret Sharing (SSS) thresholding scheme. Password generation now uses the jose utility instead of pwmake . This ensures enough entropy for passwords generated during the Clevis binding step. Jira:RHEL-29282 ca-certificates provide trusted CA roots in the OpenSSL directory format This update populates the /etc/pki/ca-trust/extracted/pem/directory-hash/ directory with trusted CA root certificates. As a consequence, lookups and validations are faster when OpenSSL is configured to load certificates from this directory, for example, by setting the SSL_CERT_DIR environment variable to /etc/pki/ca-trust/extracted/pem/directory-hash/ . Jira:RHEL-21094 [1] The nbdkit service is confined by SELinux The nbdkit-selinux subpackage adds new rules to the SELinux policy, and as a result, nbdkit is confined in SELinux. Therefore, the systems that run nbdkit are more resilient against privilege escalation attacks. Jira:RHEL-5174 libreswan rebased to 4.15 The libreswan packages have been rebased to upstream version 4.15. This version provides substantial improvements over the version 4.9 that was provided in releases. Removed a dependency on libxz through libsystemd . In IKEv1, default proposals have been set to aes-sha1 for Encapsulating Security Payload (ESP) and sha1 for Authentication Header (AH). IKEv1 rejects ESP proposals that combine Authenticated Encryption with Associated Data (AEAD) and non-empty INTEG. IKEv1 rejects exchange when a connection has no proposals. IKEv1 has now a more limited default cryptosuite: Failures of the libcap-ng library are no longer unrecoverable. TFC padding is now set for AEAD algorithms in the pluto utility. Jira:RHEL-50006 [1] jose rebased to version 14 The jose package has been upgraded to upstream version 14. jose is a C-language implementation of the Javascript Object Signing and Encryption (JOSE) standards. The most important enhancements and fixes include the following: Improved bound checks for the len function for the oct JWK Type in OpenSSL. The protected JSON Web Encryption (JWE) headers no longer contain zip . jose avoids potential denial of service (DoS) attacks by using high decompression chunks. Jira:RHEL-38079 Four RHEL services removed from SELinux permissive mode The following SELinux domains for RHEL services have been removed from SELinux permissive mode: afterburn_t bootupd_t mptcpd_t rshim_t Previously, these services from packages recently added to RHEL 9 were temporarily set to SELinux permissive mode, which allows gathering information about additional denials while the rest of the system is in SELinux enforcing mode. This temporary setting has now been removed, and as a result, these services now run in SELinux enforcing mode. Jira:RHEL-22173 The bootupd service is SELinux confined The bootupd service supports updating the boot loader, and therefore needs to be confined. This update to the SELinux policy adds additional rules, and as a result, the bootupd service runs in the bootupd_t SELinux domain. Jira:RHEL-22172 4.3. 
RHEL for Edge Support available to file system customization for the simplified-installer and raw image types With this enhancement, now you can add file system customizations to a blueprint when building the following image types: simplified-installer edge-raw-image edge-ami edge-vsphere With some additional exceptions for OSTree systems, you can choose arbitrary directory names at the /root level of the file system, for example: /local , / mypartition , /USDPARTITION . In logical volumes, these changes are made on top of the LVM partitioning system. The following directories are supported: /var , /var/log , and /var/lib/containers on a separate logical volume. Jira:RHELDOCS-17515 [1] 4.4. Shells and command-line tools The default value for the DefaultLimitCore systemd configuration option is now set to unlimited:unlimited Previously, the default value for the DefaultLimitCore systemd configuration option was set to 0:infinity . As a result, all processes started by systemd had a soft process limit for core files set to 0 , and no core files were created by default. However, the process adjusted the limit as required. With this update, the default value for DefaultLimitCore is set to unlimited:unlimited . As a result, the core file size is not limited by default. The default size of the crash dumps in the /etc/systemd/coredump.conf systemd-coredump component configuration file is 1GiB . Note that you can gather crash dumps for sporadic crashes, but ensure that the use of disk space by crash dumps remains conservative. Note The crash dumps stored by systemd-coredump are removed after 14 days if not used. Jira:RHEL-15501 openCryptoki rebased to version 3.23.0 The openCryptoki packages are updated to version 3.23.0, which provides multiple bug fixes and enhancements. Notable changes include: EP11 : Added support for FIPS-session mode Various updates are available for protection against RSA timing attacks Jira:RHEL-23673 [1] librtas rebased to version 2.0.6 The librtas package is updated to version 2.0.6. With this update, you can use the lockdown-compatible ABI provided by the kernel. Jira:RHEL-10566 [1] 4.5. Infrastructure services The BIND 9.18 is now supported in RHEL BIND 9.18 has been added in RHEL 9.5 in the new bind9.18 package. The notable feature enhancements include the following: Added support for DNS over TLS (DoT) and DNS over HTTPS (DoH) in the named daemon Added support for both incoming and outgoing zone transfers over TLS Improved support for OpenSSL 3.0 interfaces New configuration options for tuning TCP and UDP send and receive buffers Various improvements to the dig utility Jira:RHEL-14898 [1] intel-lpmd package is now available Intel Low Power Model Daemon is a Linux daemon, which optimizes active idle power. It selects a set of most power efficient CPUs based on configuration file or CPU topology. Based on the system utilization and other information, it puts the system into Low Power Mode by activating the power efficient CPUs and disabling the rest. The system can be restored from Low Power Mode by activating all CPUs. It is supported on Intel CPUs featuring hybrid architecture such as Performance-cores and Efficient-cores, which includes Meteor Lake CPUs, and both desktop and mobile. intel-lpmd has the following advantages: Improved power efficiency: intel-lpmd intelligently distributes workloads between P-cores and E-cores. Longer battery life: intel-lpmd reduces power consumption during idle periods. The daemon is not enabled by default. 
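A minimal sketch of enabling and starting the daemon described above; the unit name intel_lpmd.service is an assumption based on the package contents, so verify it first, for example with rpm -ql intel-lpmd | grep '\.service':

# systemctl enable intel_lpmd.service
# systemctl start intel_lpmd.service

Alternatively, combine both steps with systemctl enable --now intel_lpmd.service.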
To ensure that it starts on boot, enable and then start the intel-lpmd systemd service, as shown in the sketch above. Note By default, you must enable intel-lpmd if you are required to meet certain product energy efficiency policies. Jira:RHELDOCS-18391 [1] 4.6. Networking NetworkManager now supports the leftsubnet parameter for IPsec VPNs With this update, NetworkManager supports the leftsubnet parameter to define the private subnet behind the local participant used to configure subnet-to-subnet scenarios in Internet Protocol Security (IPsec) VPNs. Jira:RHEL-26776 nmstate now supports the congestion window clamp ( cwnd ) option With this update, you can use the cwnd option of the nmstate utility to set a maximum limit on the TCP congestion window size. This way you can control the maximum amount of unacknowledged data, expressed as a number of packets, that can be in transit over the network at any given time. You set the cwnd option in the desired-state YAML file that you apply with the nmstate utility. Jira:RHEL-19409 The NetworkManager-libreswan plugin supports the rightcert option You can use the rightcert option when configuring Libreswan connections through NetworkManager. With this option, you can authenticate the "right" side participant of the IPsec (Internet Protocol Security) connection using a certificate. Jira:RHEL-30370 The nmstate utility now supports the rightcert option You can use the rightcert option when configuring Libreswan connections through the nmstate utility. With this option, you can authenticate the "right" side participant of the IPsec (Internet Protocol Security) connection using a certificate. As with other nmstate settings, you set the rightcert option in the desired-state YAML file. Jira:RHEL-28898 nmstate now supports the leftsubnet option You can define entire subnets for IPsec (Internet Protocol Security) connections when configuring Libreswan connections through the nmstate utility by using the leftsubnet option. This ensures secure communication between different network segments. You also set the leftsubnet option in the desired-state YAML file. Note that the IPsec technology requires a peer-to-peer configuration, including another server with appropriate IP addresses and IPsec settings. Jira:RHEL-26755 NetworkManager supports connecting to IPsec VPNs that use IPv6 addressing Previously, NetworkManager supported only IPv4 addressing when using the NetworkManager-libreswan plugin to connect to Internet Protocol Security (IPsec) VPNs. With this update, you can connect to IPsec VPNs that use IPv6 addressing. Jira:RHEL-21875 You can use both firewalld and nftables services simultaneously The firewalld and nftables systemd services are available to use simultaneously. Previously, users could enable only one of these services at a time. With this enhancement, these systemd services no longer conflict with each other. Jira:RHEL-17002 [1] 4.7. Kernel Kernel version in RHEL 9.5 Red Hat Enterprise Linux 9.5 is distributed with the kernel version 5.14.0-503.11.1. The eBPF facility has been rebased to Linux kernel version 6.8 Notable changes and enhancements include: Support for exceptions, which allow asserting conditions in BPF programs that should never be true but are hard for the verifier to infer. Improved handling of per-cpu objects, such as support for local per-cpu kptr and support for allocating and storing per-cpu objects in maps. Support for BPF v4 CPU instructions for arm32 and s390x . Several new open-coded iterators for task, task_vma, css, and css_task. New kfunc that acquires the associated cgroup of a task within a specific cgroup v1 hierarchy.
Support for BPF link_info for uprobe multi-link along with bpftool integration. Several improvements and bug fixes in the BPF verifier allowing more precise program verification and improving the BPF program developer experience. Verifier improvement which prevents the creation of infinite loops by combining tail calls and fentry/fexit programs. Change in BPF verifier logic to validate global subprograms lazily instead of unconditionally before the main program, so they can be guarded using BPF CO-RE techniques. Add the ability to pin the BPF timer to the current CPU. Support UID or GID options when mounting bpffs . Jira:RHEL-23644 [1] rteval now supports relative CPU lists for loads With this enhancement, the --loads-cpulist now accepts relative CPU lists as arguments. The syntax is the same for the default measurement CPU list when using the parameter --measurement-cpulist . Jira:RHEL-25206 [1] A support for 420xx devices is added to QAT With this update, QAT supports 420xx devices. It includes a new device driver that supports updates to the firmware loader and other capabilities. Compared to 4xxx devices, the 420xx devices now have more acceleration engines, 16 service engines, and 1 administrative engine, and support the wireless cipher algorithms ZUC and Snow 3G . Jira:RHEL-17715 [1] Introducing noswap option when mounting TMPFS filesystem TMPFS is an in-memory filesystem largely utilized for quickly sharing information across multiple processes. Starting with version 2.2, glibc expects a tmpfs filesystem to be mounted at dev/shm to support POSIX shared memory. This mount point is necessary for shm_open and shm_unlink subroutines to function correctly. TMPFS blocks can be swapped out when there is a memory shortage, which poses a problem for certain performance- or privacy-critical workloads. Passing the new noswap mount option when mounting a TMPFS filesystem disables swap for that particular mount point of TMPFS. Jira:RHEL-31975 [1] Kernel module is now updated to version 6.8 Kernel module is now updated to version 6.8, which includes the following features: Improved Hardware Support: Expanded compatibility for the latest processors, GPUs, and peripherals. Security Enhancements: Integration of critical security patches and mitigations to address recent vulnerabilities. Performance Optimizations: Enhanced scheduling, memory management, and I/O performance for improved workload efficiency. Jira:RHEL-28063 [1] Introducing rteval container for real-time performance testing The rteval container provides tools and methods for accurately measuring system latencies. With this feature, users can measure the real-time performance of their systems. It evaluates the configuration of the Linux kernel for optimal real-time performance to analyze performance based on specific application needs. Note that no specific tuning guidelines are provided in the RHEL 9.5 release, and support is limited to customers with a Real-Time subscription. Jira:RHELDOCS-19122 [1] NVMf-FC kdump is now supported on the IBM Power NVMf-FC kdump now supports the IBM Power system for running kexec-tools . This allows the capture of system memory dumps over a fiber channel network using the NVMe storage devices for high-speed and low-latency access to storage for crash dump data. Jira:RHEL-11471 [1] 4.8. Boot loader UEFI variable filesystem ( efivarfs ) now supports analyzing persistent EFI variable space With this update, you can now analyze the space used by persistent EFI variable storage on systems booted in UEFI mode. 
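A minimal sketch of such an analysis on a UEFI-booted system; the efivarfs mount point /sys/firmware/efi/efivars is the usual default, but you can confirm it with findmnt -t efivarfs:

# df -h /sys/firmware/efi/efivars
# du -sh /sys/firmware/efi/efivars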
Using the utilities df and du , you can calculate the total space used by UEFI variables, such as EFI boot variables and the UEFI Secure Boot databases. This prevents space exhaustion and enables better management of UEFI-related configuration, including Secure Boot and boot order settings. Jira:RHELDOCS-19280 [1] 4.9. File systems and storage File system quotas for tmpfs file system are now supported With this update, system administrators can now implement file system quotas to limit the space or memory users can consume on a tmpfs file system, preventing memory exhaustion. Jira:RHEL-7768 [1] NVMe TP 8006 in-band authentication with NVMe/TCP is now supported NVMe TP 8006 in-band authentication for NVMe over Fabrics (NVMe-oF) was introduced in RHEL 9.2 as a Technology Preview, which is now fully supported. This feature provides DH-HMAC-CHAP in-band authentication protocol for NVMe-oF, which is defined in the NVMe Technical Proposal 8006. For details, see the dhchap-secret and dhchap-ctrl-secret option descriptions in the nvme-connect(1) man page. Jira:RHEL-61452 cryptsetup rebased to version 2.7 The cryptsetup package has been rebased to version 2.7. It contains improvements for the libcryptsetup package to support Linux Unified Key Setup (LUKS) encrypted devices in the kdump enabled systems. Jira:RHEL-32377 [1] Dax feature is now supported for Ext4 and XFS The direct access (dax) feature for the Ext4 and XFS file systems, previously available as a Technology Preview, is now fully supported. DAX enables an application to map persistent memory directly into its address space, enhancing performance. For more information, see Creating a file system DAX namespace on an NVDIMM . Jira:RHELDOCS-19196 [1] EROFS file system is now supported EROFS is a lightweight generic read-only file system suitable for various read-only use cases, such as embedded devices or containers. It provides deduplication and transparent compression as options for scenarios that require them. For more information, see the erofs documentation . Jira:RHELDOCS-18451 4.10. High availability and clusters New pcs status wait command The pcs command-line interface now provides a pcs status wait command. This command ensures that Pacemaker has completed any actions required by changes to the Cluster Information Base (CIB) and does not need to take any further actions in order to make the actual cluster state match the requested cluster state. Jira:RHEL-25854 pcs support for new commands to query the status of a resource in a cluster The pcs command-line interface now provides pcs status query resource commands to query various attributes of a single resource in a cluster. These commands query: the existence of the resource the type of the resource the state of the resource various information about the members of a collective resource on which nodes the resource is running You can use these commands for pcs-based scripting since there is no need to parse plain text outputs. Jira:RHEL-21051 New pcs resource defaults and pcs resource op defaults option for displaying configuration in text, JSON, and command formats The pcs resource defaults and pcs resource op defaults commands and their aliases pcs stonith defaults and pcs stonith op defaults now provide the --output-format option. Specifying --output-format=text displays the configured resource defaults or operation defaults in plain text format, which is the default value for this option. 
Specifying --output-format=cmd displays the pcs resource defaults or pcs resource op defaults commands created from the current cluster defaults configuration. You can use these commands to re-create configured resource defaults or resource operation defaults on a different system. Specifying --output-format=json displays the configured resource defaults or resource operation defaults in JSON format, which is suitable for machine parsing. Jira:RHEL-16231 New Pacemaker option to leave a panicked node shut down without rebooting automatically You can now set the PCMK_panic_action variable in the /etc/sysconfig/pacemaker configuration file to off or sync-off . When you set this variable to off or sync-off , a node remains shut down after a panic condition instead of rebooting automatically. Jira:RHEL-39057 Support for new pcsd Web UI features The pcsd Web UI now supports the following features: When you set the placement-strategy cluster property to default , the pcsd Web UI displays a warning near the utilization attributes for nodes and resources. This warning notes that the utilization has no effect due to placement-strategy configuration. The pscd Web UI supports dark mode, which you can set through the user menu in the masthead. Jira:RHEL-21895 , Jira:RHEL-7726 4.11. Dynamic programming languages, web and database servers Increased performance of the Python interpreter All supported versions of Python in RHEL 9 are now compiled with GCC's -O3 optimization flag, which is the default in upstream. As a result, you can observe increased performance of your Python applications and the interpreter itself. Jira:RHEL-49615 [1] , Jira:RHEL-49635, Jira:RHEL-49637 httpd rebased to 2.4.62 The httpd package has been updated to version 2.4.62 that includes various bug fixes, security fixes, and new features. Notable feature include : The following directives have been added: CGIScriptTimeout directive is added in the mod_cgi module . AliasPreservePath directive in the mod_alias module to map the full path after alias in a location. RedirectRelative directive in mod_alias to allow relative redirect targets to be issued as-is. DeflateAlterETag directive in the mod_deflate module to control the modification of ETag . The NoChange parameter mimics 2.2.x behavior. An optional third argument for the ProxyRemote server is added in the mod_proxy module which configures basic authentication credentials to pass to the remote proxy. LDAPConnectionPoolTTL directive now accepts negative values to allow reusing the connections of any age. Previously, an error was encountered in the mod_ldap module when you parsed the configuration file with a negative value. You can now use the -T option to allow truncating the subsequent rotated log files without the initial log file being truncated in the rotatelogs binary. Jira:RHEL-14668 mod_md rebased to version 2.4.26 The mod_md module has been updated to version 2.4.26. Notable changes over the version include: The following directives have been added: MDCheckInterval to control the number of server checks for detected revocations. MDMatchNames all|servernames to allow more control over how the MDomains are matched to the VirtualHosts. MDChallengeDns01Version . When you set the value of this directive to 2 , it provides the command with the challenge value on the teardown invocation. By default, in version 1, only the setup invocation gets this parameter. 
For Managed Domain in manual mode , the mod_md_verification module now checks if all used ServerName and ServerAlias reports a warning instead of an error (AH10040). You can now configure the MDChallengeDns01 directive for individual domains. Jira:RHEL-25075 [1] PostgreSQL 16 now provides the pgvector extension The postgresql:16 module stream is now distributed with the pgvector extension. With the pgvector extension, you can store and query high-dimensional vector embeddings directly within PostgreSQL databases and perform a vector similarity search. Vector embeddings are numerical representations of data that are often used in machine learning and AI applications to capture the semantic meaning of text, images, or other data types. Jira:RHEL-34669 A new db_converter tool to convert a libdb database to the GDBM format The deprecated Berkeley DB ( libdb ) now provides the db_converter tool for converting a lidbd database to the GNU dbm (GDBM) database format. The db_converter tool is distributed in the libdb-utils subpackage. For more information about alternatives to libdb , see the Red Hat Knowledgebase article Available replacements for the deprecated Berkeley DB (libdb) in RHEL . Jira:RHEL-35607 A new nodejs:22 module stream is fully supported A new module stream, nodejs:22 , previously available as a Technology Preview, is fully supported with the release of the RHEA-2024:11235 advisory. The nodejs:22 module stream now provides Node.js 22.11 , which is a Long Term Support (LTS) version. Node.js 22 included in RHEL 9.5 provides numerous new features, bug fixes, security fixes, and performance improvements over Node.js 20 available since RHEL 9.3. Notable changes include: The V8 JavaScript engine has been upgraded to version 12.4. The V8 Maglev compiler is now enabled by default on architectures where it is available (AMD and Intel 64-bit architectures and the 64-bit ARM architecture). Maglev improves performance for short-lived CLI programs. The npm package manager has been upgraded to version 10.8.1. The node --watch mode is now considered stable. In watch mode, changes in watched files cause the Node.js process to restart. The browser-compatible implementation of WebSocket is now considered stable and enabled by default. As a result, a WebSocket client to Node.js is available without external dependencies. Node.js now includes an experimental feature for execution of scripts from package.json . To use this feature, execute the node --run <script-in-package.json> command. To install the nodejs:22 module stream, use: If you want to upgrade from the nodejs:20 stream, see Switching to a later stream . For information about the length of support for the nodejs Application Streams, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-67327 4.12. Compilers and development tools System GCC rebased to version 11.5 The system version of GCC in RHEL 9 has been updated to version 11.5. This update provides numerous bug fixes. Jira:RHEL-35635 A new tunable for glibc is available to improve performance by placing dynamic objects closer together Previously, the dynamic loader of glibc placed dynamic objects randomly throughout the available address space to enhance security. Consequently, objects were often too far apart, which led to inefficient calls between them. 
With this update, you can now place objects closer together, specifically, in the first 2 GB of address space, by setting the following tunable: Setting this tunable might result in improved performance for some applications at the expense of a small reduction in address space layout randomization (ASLR). Jira:RHEL-20172 [1] glibc now supports dynamic linking of Intel APX-enabled functions An incompatible dynamic linker trampoline was identified as a potential source of incompatibilities for Intel Advanced Performance Extensions (APX) applications. As a workaround, it was possible to use the BIND_NOW executable or use only the standard calling convention. With this update, the dynamic linker of glibc preserves APX-related registers. Note Because of this change, additional space is needed beyond the top of the stack. Users who strictly limit this space might need to adjust or evaluate the stack limits. Jira:RHEL-25046 [1] Optimization of AMD Zen 3 and Zen 4 performance in glibc Previously, AMD Zen 3 and Zen 4 processors sometimes used the Enhanced Repeat Move String (ERMS) version of the memcpy and memmove library routines regardless of the most optimal choice. With this update to glibc , AMD Zen 3 and Zen 4 processors use the most optimal versions of memcpy and memmove . Jira:RHEL-25531 [1] System version of GDB rebased to version 14.2 and GDB removed from GCC Toolset GDB has been updated to version 14.2. Starting with RHEL 9.5, GDB is transitioning into a rolling Application Stream with its system version rebased in minor releases of RHEL. Therefore, GDB is not included in GCC Toolset 14 in RHEL 9. The following paragraphs list notable changes in GDB 14.2 since GDB 12.1. General: The info breakpoints command now displays enabled breakpoint locations of disabled breakpoints as in the y- state. Added support for debug sections compressed with Zstandard ( ELFCOMPRESS_ZSTD ) for ELF. The Text User Interface (TUI) no longer styles the source and assembly code highlighted by the current position indicator by default. To re-enable styling, use the new command set style tui-current-position . A new USD_inferior_thread_count convenience variable contains the number of live threads in the current inferior. For breakpoints with multiple code locations, GDB now prints the code location using the <breakpoint_number>.<location_number> syntax. When a breakpoint is hit, GDB now sets the USD_hit_bpnum and USD_hit_locno convenience variables to the hit breakpoint number and code location number. You can now disable the last hit breakpoint by using the disable USD_hit_bpnum command, or disable only the specific breakpoint code location by using the disable USD_hit_bpnum.USD_hit_locno command. Added support for the NO_COLOR environment variable. Added support for integer types larger than 64 bits. You can use new commands for multi-target feature configuration to configure remote target feature sets (see the set remote <name>-packet and show remote <name>-packet in Commands). Added support for the Debugger Adapter Protocol. You can now use the new inferior keyword to make breakpoints inferior-specific (see break or watch in Commands). You can now use the new USD_shell() convenience function to run a shell command during expression evaluation. Changes to existing commands: break , watch Using the thread or task keywords multiple times with the break and watch commands now results in an error instead of using the thread or task ID of the last instance of the keyword. 
Using more than one of the thread , task , and inferior keywords in the same break or watch command is now invalid. printf , dprintf The printf and dprintf commands now accept the %V output format, which formats an expression the same way as the print command. You can also modify the output format by using additional print options in brackets [... ] following the command, for example: printf "%V[-array-indexes on]", <array> . list You can now use the . argument to print the location around the point of execution in the current frame, or around the beginning of the main() function if the inferior has not started yet. Attempting to list more source lines in a file than are available now issues a warning, referring the user to the . argument. document user-defined It is now possible to document user-defined aliases. New commands: set print nibbles [on|off] (default: off ), show print nibbles - controls whether the print/t command displays binary values in groups of four bits (nibbles). set debug infcall [on|off] (default: off ), show debug infcall - prints additional debug messages about inferior function calls. set debug solib [on|off] (default: off ), show debug solib - prints additional debug messages about shared library handling. set print characters <LIMIT> , show print characters , print -characters <LIMIT> - controls how many characters of a string are printed. set debug breakpoint [on|off] (default: off ), show debug breakpoint - prints additional debug messages about breakpoint insertion and removal. maintenance print record-instruction [ N ] - prints the recorded information for a given instruction. maintenance info frame-unwinders - lists the frame unwinders currently in effect in the order of priority (highest first). maintenance wait-for-index-cache - waits until all pending writes to the index cache are completed. info main - prints information on the main symbol to identify an entry point into the program. set tui mouse-events [on|off] (default: on ), show tui mouse-events - controls whether mouse click events are sent to the TUI and Python extensions (when on ), or the terminal (when off ). Machine Interface (MI) changes: MI version 1 has been removed. MI now reports no-history when reverse execution history is exhausted. The thread and task breakpoint fields are no longer reported twice in the output of the -break-insert command. Thread-specific breakpoints can no longer be created on non-existent thread IDs. The --simple-values argument to the -stack-list-arguments , -stack-list-locals , -stack-list-variables , and -var-list-children commands now considers reference types as simple if the target is simple. The -break-insert command now accepts a new -g thread-group-id option to create inferior-specific breakpoints. Breakpoint-created notifications and the output of the -break-insert command can now include an optional inferior field for the main breakpoint and each breakpoint location. The asynchronous record stating the breakpoint-hit stopped reason now contains an optional field locno giving the code location number in case of a multi-location breakpoint. Changes in the GDB Python API: Events A new gdb.ThreadExitedEvent event. A new gdb.executable_changed event registry, which emits the ExecutableChangedEvent objects that have progspace and reload attributes. New gdb.events.new_progspace and gdb.events.free_progspace event registries, which emit the NewProgpspaceEvent and FreeProgspaceEvent event types. 
Both of these event types have a single attribute progspace to specify the gdb.Progspace program space that is being added to or removed from GDB. The gdb.unwinder.Unwinder class The name attribute is now read-only. The name argument of the __init__ function must be of the str type, otherwise a TypeError is raised. The enabled attribute now accepts only the bool type. The gdb.PendingFrame class New methods: name , is_valid , pc , language , find_sal , block , and function , which mirror similar methods of the gdb.Frame class. The frame-id argument of the create_unwind_info function can now be either an integer or a gdb.Value object for the pc , sp , and special attributes. A new gdb.unwinder.FrameId class, which can be passed to the gdb.PendingFrame.create_unwind_info function. The gdb.disassembler.DisassemblerResult class can no longer be sub-classed. The gdb.disassembler module now includes styling support. A new gdb.execute_mi(COMMAND, [ARG]... ) function, which invokes a GDB/MI command and returns result as a Python dictionary. A new gdb.block_signals() function, which returns a context manager that blocks any signals that GDB needs to handle. A new gdb.Thread subclass of the threading.Thread class, which calls the gdb.block_signals function in its start method. The gdb.parse_and_eval function has a new global_context parameter to restrict parsing on global symbols. The gdb.Inferior class A new arguments attribute, which holds the command-line arguments to the inferior, if known. A new main_name attribute, which holds the name of the inferior's main function, if known. New clear_env , set_env , and unset_env methods, which can modify the inferior's environment before it is started. The gdb.Value class A new assign method to assign a value of an object. A new to_array method to convert an array-like value to an array. The gdb.Progspace class A new objfile_for_address method, which returns the gdb.Objfile object that covers a given address (if exists). A new symbol_file attribute holding the gdb.Objfile object that corresponds to the Progspace.filename variable (or None if the filename is None ). A new executable_filename attribute, which holds the string with a filename that is set by the exec-file or file commands, or None if no executable file is set. The gdb.Breakpoint class A new inferior attribute, which contains the inferior ID (an integer) for breakpoints that are inferior-specific, or None if no such breakpoints are set. The gdb.Type class New is_array_like and is_string_like methods, which reflect whether a type might be array- or string-like regardless of the type's actual type code. A new gdb.ValuePrinter class, which can be used as the base class for the result of applying a pretty-printer. A newly implemented gdb.LazyString.__str__ method. The gdb.Frame class A new static_link method, which returns the outer frame of a nested function frame. A new gdb.Frame.language method that returns the name of the frame's language. The gdb.Command class GDB now reformats the doc string for the gdb.Command class and the gdb.Parameter sub-classes to remove unnecessary leading whitespace from each line before using the string as the help output. The gdb.Objfile class A new is_file attribute. A new gdb.format_address(ADDRESS, PROGSPACE, ARCHITECTURE) function, which uses the same format as when printing address, symbol, and offset information from the disassembler. A new gdb.current_language function, which returns the name of the current language. 
A new Python API for wrapping GDB's disassembler, including gdb.disassembler.register_disassembler(DISASSEMBLER, ARCH) , gdb.disassembler.Disassembler , gdb.disassembler.DisassembleInfo , gdb.disassembler.builtin_disassemble(INFO, MEMORY_SOURCE) , and gdb.disassembler.DisassemblerResult . A new gdb.print_options function, which returns a dictionary of the prevailing print options, in the form accepted by the gdb.Value.format_string function. The gdb.Value.format_string function gdb.Value.format_string now uses the format provided by the print command if it is called during a print or other similar operation. gdb.Value.format_string now accepts the summary keyword. A new gdb.BreakpointLocation Python type. The gdb.register_window_type method now restricts the set of acceptable window names. Architecture-specific changes: AMD and Intel 64-bit architectures Added support for disassembler styling using the libopcodes library, which is now used by default. You can modify how the disassembler output is styled by using the set style disassembler * commands. To use the Python Pygments styling instead, use the new maintenance set libopcodes-styling off command. The 64-bit ARM architecture Added support for dumping memory tag data for the Memory Tagging Extension (MTE). Added support for the Scalable Matrix Extension 1 and 2 (SME/SME2). Some features are still considered experimental or alpha, for example, manual function calls with ZA state or tracking Scalable Vector Graphics (SVG) changes based on DWARF. Added support for Thread Local Storage (TLS) variables. Added support for hardware watchpoints. The 64-bit IBM Z architecture Record and replay support for the new arch14 instructions on IBM Z targets, except for the specialized-function-assist instruction NNPA . IBM Power Systems, Little Endian Added base enablement support for POWER11. For more details about rolling Application Streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-36211 , Jira:RHEL-10550, Jira:RHEL-39555 elfutils rebased to version 0.191 The elfutils package has been updated to version 0.191. Notable improvements include: Changes in the libdw library: The dwarf_addrdie function now supports binaries lacking a debug_aranges section. Support for DWARF package files has been improved. A new dwarf_cu_dwp_section_info function has been added. Caching eviction logic in the debuginfod server has been enhanced to improve retention of small, frequent, or slow files, such as vdso.debug . The eu-srcfiles utility can now fetch the source files of a DWARF/ELF file and place them into a zip archive. Jira:RHEL-29194 SystemTap rebased to version 5.1 The SystemTap tracing and probing tool has been updated to version 5.1. Notable changes include: An experimental --build-as=USER flag to reduce privileges during script compilation. Improved support for probing processes running in containers, identified by host PID. New probes for userspace hardware breakpoints and watchpoints. Support for the --remote operation of --runtime=bpf mode. Improved robustness of kernel-user transport. Jira:RHEL-29528 valgrind rebased to version 3.23.0 The Valgrind suite has been updated to version 3.23.0. Notable enhancements include: The --track-fds=yes option now warns against double closing of file descriptors, generates suppressible errors, and supports XML output. The --show-error-list=no|yes option now accepts a new value, all , to also print the suppressed errors. 
On the 64-bit IBM Z architecture, Valgrind now supports neural network processing assist (NNPA) facility vector instructions: VCNF , VCLFNH , VCFN , VCLFNL , VCRNF , and NNPA (z16/arch14). On the 64-bit ARM architecture, Valgrind now supports dotprod instructions ( sdot/udot ). On the AMD and Intel 64-bit architectures, Valgrind now provides more accurate instruction support for the x86_64-v3 microarchitecture. Valgrind now provides wrappers for the wcpncpy , memccpy , strlcat , and strlcpy functions that can detect memory overlap. Valgrind now supports the following Linux syscalls: mlock2 , fchmodat2 , and pidfd_getfd . Jira:RHEL-29534 , Jira:RHEL-10551 libabigail rebased to version 2.5 The libabigail library has been updated to version 2.5. Notable changes include: Improved suppression specification for strict conversions of flexible array data members. Added support for pointer-to-member types in C++ binaries. Improved weak mode of the abicompat tool. A new abidb tool to manage the ABI of operating systems. Numerous bug fixes. Jira:RHEL-30013 , Jira:RHEL-7325, Jira:RHEL-7332 New GCC Toolset 14 GCC Toolset 14 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The following tools and versions are provided by GCC Toolset 14: GCC 14.2 binutils 2.41 annobin 12.70 dwz 0.14 Note that the system version of GDB has been rebased and GDB is no longer included in GCC Toolset . To install GCC Toolset 14, enter the following command as root: To run a tool from GCC Toolset 14: To run a shell session where tool versions from GCC Toolset 14 override system versions of these tools: GCC Toolset 14 components are also available in the gcc-toolset-14-toolchain container image. For more information, see GCC Toolset 14 and Using GCC Toolset . Jira:RHEL-29758 [1] , Jira:RHEL-29852 GCC Toolset 14: GCC rebased to version 14.2 In GCC Toolset 14, the GNU Compiler Collection (GCC) has been updated to version 14.2. Notable changes include: Optimization and diagnostic improvements A new -fhardened umbrella option, which enables a set of hardening flags A new -fharden-control-flow-redundancy option to detect attacks that transfer control into the middle of functions A new strub type attribute to control stack scrubbing properties of functions and variables A new -finline-stringops option to force inline expansion of certain mem* functions Support for new OpenMP 5.1, 5.2, and 6.0 features Several new C23 features Multiple new C++23 and C++26 features Several resolved C++ defect reports New and improved experimental support for C++20, C++23, and C++26 in the C++ library Support for new CPUs in the 64-bit ARM architecture Multiple new instruction set architecture (ISA) extensions in the 64-bit Intel architecture, for example: AVX10.1, AVX-VNNI-INT16, SHA512, and SM4 New warnings in the GCC's static analyzer Certain warnings changed to errors; for details, see Porting to GCC 14 Various bug fixes For more information about changes in GCC 14, see the upstream GCC release notes . Jira:RHEL-29853 [1] GCC Toolset 14: annobin rebased to version 12.70 In GCC Toolset 14, annobin has been updated to version 12.70. The updated set of the annobin tools for testing binaries provides various bug fixes, introduces new tests, and updates the tools to build and work with newer versions of the GCC, Clang, LLVM, and Go compilers. 
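The GCC Toolset 14 note earlier in this section refers to installation and usage commands. A minimal sketch of the usual Software Collection workflow, assuming the package and collection are both named gcc-toolset-14 as in earlier GCC Toolset releases:

# dnf install gcc-toolset-14
$ scl enable gcc-toolset-14 'gcc --version'
$ scl enable gcc-toolset-14 bash

The first scl command runs a single tool from the toolset; the second opens a shell session in which the toolset versions override the system versions of those tools.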
With the enhanced tools, you can detect new issues in programs that are built in a non-standard way. Jira:RHEL-29850 [1] GCC Toolset 14: binutils rebased to version 2.41 RHEL 9.5 is distributed with GCC Toolset 14 binutils version 2.41. New features include: binutils tools support architecture extensions in the 64-bit Intel and ARM architectures. The linker now accepts the --remap-inputs <PATTERN>=<FILE> command-line option to replace any input file that matches <PATTERN> with <FILE> . In addition, you can use the --remap-inputs-file=<FILE> option to specify a file containing any number of these remapping directives. For ELF targets, you can use the linker command-line option --print-map-locals to include local symbols in a linker map. For most ELF-based targets, you can use the --enable-linker-version option to insert the version of the linker as a string into the .comment section. The linker script syntax has a new command for output sections, ASCIZ "<string>" , which inserts a zero-terminated string at the current location. You can use the new -z nosectionheader linker command-line option to omit ELF section header. Jira:RHEL-29851 [1] GCC Toolset 13: GCC supports AMD Zen 5 The GCC Toolset 13 version of GCC adds support for the AMD Zen 5 processor microarchitecture. To enable the support, use the -march=znver5 command-line option. Jira:RHEL-36523 [1] LLVM Toolset updated to 18.1.8 LLVM Toolset has been updated to version 18.1.8. Notable LLVM updates: The constant expression variants of the following instructions have been removed: and , or , lshr , ashr , zext , sext , fptrunc , fpext , fptoui , fptosi , uitofp , sitofp . The llvm.exp10 intrinsic has been added. The code_model attribute for global variables has been added. The backend for the AArch64, AMDGPU, PowerPC, RISC-V, SystemZ and x86 architectures has been improved. LLVM tools have been improved. Notable Clang enhancements: C++20 feature support: Clang no longer performs One Definition Rule (ODR) checks for declarations in the global module fragment. To enable more strict behavior, use the -Xclang -fno-skip-odr-check-in-gmf option. C++23 feature support: A new diagnostic flag -Wc++23-lambda-attributes has been added to warn about the use of attributes on lambdas. C++2c feature support: Clang now allows using the _ character as a placeholder variable name multiple times in the same scope. Attributes now expect unevaluated strings in attribute parameters that are string literals. The deprecated arithmetic conversion on enumerations from C++26 has been removed. The specification of template parameter initialization has been improved. For a complete list of changes, see the upstream release notes for Clang . ABI changes in Clang: Following the SystemV ABI for x86_64, the __int128 arguments are no longer split between a register and a stack slot. For more information, see the list of ABI changes in Clang . Notable backwards incompatible changes: A bug fix in the reversed argument order for templated operators breaks code in C++20 that was previously accepted in C++17. The GCC_INSTALL_PREFIX CMake variable (which sets the default --gcc-toolchain= ) is deprecated and will be removed. Specify the --gcc-install-dir= or --gcc-triple= option in a configuration file instead. The default extension name for precompiled headers (PCH) generation ( -c -xc-header and -c -xc++-header ) is now .pch instead of .gch . 
When -include a.h probes the a.h.gch file, the include now ignores a.h.gch if it is not a Clang PCH file or a directory containing any Clang PCH file. A bug that caused __has_cpp_attribute and __has_c_attribute to return incorrect values for certain C++-11-style attributes has been fixed. A bug in finding a matching operator!= while adding a reversed operator== has been fixed. The name mangling rules for function templates have been changed to accept that functions can be overloaded on their template parameter lists or requires-clauses. The -Wenum-constexpr-conversion warning is now enabled by default on system headers and macros. It will be turned into a hard (non-downgradable) error in the Clang release. A path to the imported modules for C++20 named modules can no longer be hard-coded. You must specify all the dependent modules from the command line. It is no longer possible to import modules by using import <module> ; Clang uses explicitly-built modules. For more details, see the list of potentially breaking changes . For more information, see the LLVM release notes and Clang release notes . LLVM Toolset is a rolling Application Stream, and only the latest version is supported. For more information, see the Red Hat Enterprise Linux Application Streams Life Cycle document. Jira:RHEL-28687 Rust Toolset rebased to version 1.79.0 Rust Toolset has been updated to version 1.79.0. Notable enhancements since the previously available version 1.75.0 include: A new offset_of! macro Support for C-string literals Support for inline const expressions Support for bounds in associated type position Improved automatic temporary lifetime extension Debug assertions for unsafe preconditions Rust Toolset is a rolling Application Stream, and only the latest version is supported. For more information, see the Red Hat Enterprise Linux Application Streams Life Cycle document. Jira:RHEL-30070 Go Toolset rebased to version 1.22 Go Toolset has been updated to version 1.22. Notable enhancements include: Variables in for loops are now created per iteration, preventing accidental sharing bugs. Additionally, for loops can now range over integers. Commands in workspaces can now use a vendor directory for the dependencies of the workspace. The go get command no longer supports the legacy GOPATH mode. This change does not affect the go build and go test commands. The vet tool has been updated to match the new behavior of the for loops. CPU performance has been improved by keeping type-based garbage collection metadata nearer to each heap object. Go now provides improved inlining optimizations and better profile-guided optimization support for higher performance. A new math/rand/v2 package is available. Go now provides enhanced HTTP routing patterns with support for methods and wildcards. For more information, see the Go upstream release notes. Go Toolset is a rolling Application Stream, and only the latest version is supported. For more information, see the Red Hat Enterprise Linux Application Streams Life Cycle document. Jira:RHEL-29527 [1] PCP rebased to version 6.2.2 Performance Co-Pilot (PCP) has been updated to version 6.2.2. 
Notable changes over the previously available version 6.2.0 include: New tools and agents pcp2openmetrics : a new tool to push PCP metrics in Open Metrics format to remote end points pcp-geolocate : a new tool to report latitude and longitude metric labels pmcheck : a new tool to interrogate and control PCP components pmdauwsgi : a new PCP agent that exports instrumentation from uWSGI servers Enhanced tools pmdalinux : added new kernel metrics (hugepages, filesystems, TCP, softnet, virtual machine balloon) pmdalibvirt : added support for metric labels, added new balloon, vCPU, and domain info metrics pmdabpf : improved eBPF networking metrics for use with the pcp-atop utility Jira:RHEL-30198 Grafana rebased to version 10.2.6 The Grafana platform has been updated to version 10.2.6. Notable enhancements include: Support for zooming in on the y axis of time series and candlestick visualizations by holding shift while clicking and dragging. Streamlined data source selection when creating a dashboard. Updated User Interface, including updates to navigation and the command palette. Various improvements to transformations, including the new unary operation mode for the Add field from calculation transformation. Various improvements to dashboards and data visualizations, including a redesigned empty dashboard and dashboard panel. New geomap and canvas panels. Other changes: Various improvements to users, access, authentication, authorization, and security. Alerting improvements along with new alerting features. Public dashboards now available. For a complete list of changes since the previously available Grafana version 9.2, see the upstream documentation . Jira:RHEL-31246 [1] Red Hat build of OpenJDK 17 is now the default Java implementation in RHEL 9 The default RHEL 9 Java implementation is being changed from OpenJDK 11, which has reached its End Of Life (EOL), to OpenJDK 17. After this update, the java-17-openjdk packages, which provide the OpenJDK 17 Java Runtime Environment and the OpenJDK 17 Java Software Development Kit, will also provide the java and java-devel packages. For more information, see the OpenJDK documentation . Existing packages in RHEL 9 that call java/bin or java-openjdk/bin directly will be immediately able to use OpenJDK 17. Existing packages in RHEL 9 that require the java or java-devel packages directly, namely tomcat and systemtap-runtime-java , will pull the appropriate dependency automatically. Ant, Maven, and packages that are using Java indirectly through the javapackages-tools package will be fully transitioned in an asynchronous update shortly after the general availability of RHEL 9.5. If you need to install OpenJDK for the first time or if the default package is not installed through a dependency chain, use DNF: For more information, see Installing multiple minor versions of Red Hat build of OpenJDK on RHEL by using yum . Important The current java-11-openjdk packages in RHEL 9 will not receive any further updates. However, Red Hat will provide Extended Life Cycle support (ELS) phase 1 with updates for Red Hat build of OpenJDK 11 until October 31, 2027. See Red Hat build of OpenJDK 11 Extended Lifecycle Support (ELS-1) Availability for details. For information specific to the OpenJDK ELS program and the OpenJDK lifecycle, see the OpenJDK Life Cycle and Support Policy . Note If you have the alternatives command set to manual mode for java and related components, OpenJDK 11 will still be used after the update. 
To use OpenJDK 17 in this case, change the alternatives setting to auto , for example: Use the alternatives --list command to verify the settings. Jira:RHEL-56094 [1] 4.13. Identity Management python-jwcrypto rebased to version 1.5.6 The python-jwcrypto package has been updated to version 1.5.6. This version includes a security fix to an issue where an attacker could cause a denial of service attack by passing in a malicious JWE Token with a high compression ratio. Jira:RHELDOCS-18197 [1] ansible-freeipa rebased to 1.13.2 The ansible-freeipa package has been rebased from version 1.12.1 to 1.13.2 Notable enhancements include: You can now create an inventory of Identity Management (IdM) servers for ansible-freeipa playbooks dynamically. The freeipa plugin gathers data about the IdM servers in the domain, and selects only those that have a specified IdM server role assigned. For example, if you want to search the logs of all IdM DNS servers in the domain to detect possible issues, the plugin ensures that all IdM replicas with the DNS server role are detected and automatically added to the managed nodes. You can now more efficiently run ansible-freeipa playbooks that use a single Ansible task to add, modify, and delete multiple Identity Management (IdM) users, user groups, hosts, and services. Previously, each entry in a list of users had its dedicated API call. With this enhancement, several API calls are combined into one API call within a task. The same applies to lists of user groups, hosts and services. As a result, the speed of adding, modifying, and deleting these IdM objects by using the ipauser , ipagroup , ipahost and ipaservice modules is increased. The biggest benefit can be seen when the client context is used. ansible-freeipa now additionally provides the roles and modules as an Ansible collection in the ansible-freeipa-collection subpackage. To use the new collection: Install the ansible-freeipa-collection subpackage. Add the freeipa.ansible_freeipa prefix to the names of roles and modules. Use the fully-qualified names to follow Ansible recommendations. For example, to refer to the ipahbacrule module, use freeipa.ansible_freeipa.ipahbacrule . You can simplify the use of the modules that are part of the freeipa.ansible_freeipa collection by applying module_defaults . Jira:RHEL-35565 ipa rebased to version 4.12.0 The ipa package has been updated from version 4.11 to 4.12.0. Notable changes include: You can enforce LDAP authentication to fail for a user that does not provide an OTP token. You can enroll an Identity Management (IdM) client using a trusted Active Directory user. Documentation for identity mapping in FreeIPA is now available. The python-dns package has been rebased to version 2.6.1-1.el10. The ansible-freeipa package has been rebased from version 1.12.1 to 1.13.2. For more information, see the FreeIPA and ansible-freeipa upstream release notes. Jira:RHEL-39140 certmonger rebased to version 0.79.20 The certmonger package has been rebased to version 0.79.20. The update includes various bug fixes and enhancements, most notably: Enhanced handling of new certificates in the internal token and improved the removal process on renewal. Removed restrictions on tokens for CKM_RSA_X_509 cryptographic mechanism. Fixed the documentation for the getcert add-scep-ca , --ca-cert , and --ra-cert options. Renamed the D-Bus service and configuration files to match canonical name. Added missing .TP tags in the getcert-resubmit man page. Migrated to the SPDX license format. 
Included owner and permissions information in the getcert list output. Removed the requirement for an NSS database in the cm_certread_n_parse function. Added translations using Webplate for Simplified Chinese, Georgian, and Russian. Jira:RHEL-12493 389-ds-base rebased to version 2.5.2 The 389-ds-base package has been updated to version 2.5.2. Notable bug fixes and enhancements over version 2.4.5 include: https://www.port389.org/docs/389ds/releases/release-2-5-2.html Jira:RHEL-31777 Improved MIT krb5 TCP connection timeout handling Previously, TCP connections timed out after 10 seconds. With this update, MIT krb5 TCP connection handling has been modified to no longer use a default timeout. The request_timeout setting now limits the total request duration rather than the duration of individual TCP connections. This change addresses integration issues with SSSD, especially for two-factor authentication use cases. As a result, users experience more consistent handling of TCP connections, as the request_timeout setting now effectively controls the global request maximum duration. Jira:RHEL-17132 [1] 4.14. SSSD samba rebased to version 4.20.2 The samba packages have been upgraded to upstream version 4.20.2, which provides bug fixes and enhancements over the version. The most notable changes are: The smbacls utility can now save and restore discretionary access control list (DACL) entries. This feature mimics the functionality of the Windows icacls.exe utility. Samba now supports conditional access control entries (ACEs). Samba no longer reads currently logged on users from the /var/run/utmp file. This feature was removed from the NetWkstaGetInfo level 102 and NetWkstaEnumUsers level 0 and 1 functions because /var/run/utmp uses a time format that is not year 2038 safe. Note that the server message block version 1 (SMB1) protocol has been deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Red Hat does not support downgrading tdb database files. After updating Samba, use the testparm utility to verify the /etc/samba/smb.conf file. Jira:RHEL-33645 [1] New SSSD option: failover_primary_timeout You can use the failover_primary_timeout option to specify the time interval in seconds for the sssd service to attempt reconnecting to the primary IdM server after switching to a backup server. The default value is 31 seconds. Previously, if the primary server was unavailable, SSSD would automatically switch to a backup server after the fixed timeout of 31 seconds. Jira:RHEL-17659 [1] 4.15. Desktop GNOME Online Accounts can restrict which features providers can use You can use the new goa.conf file in the system configuration directory, usually named /etc/goa.conf , to limit what features each provider can use. In the goa.conf file, the group name defines the provider type, and the keys define boolean switches to disable the individual features. If you do not set any key or section for a feature, the feature is enabled. For example, to disable the mail feature for Google accounts, use the following setting: You can use the all special section name to cover every provider. The value in the specific provider has precedence, if it exists and contains a valid boolean value. Note that some combinations of disabled features can lead to incomplete or invalid accounts being read by the GOA users, such as the Evolution application. 
Always test the changes first. Restart the GNOME Online Accounts for the changed configuration to take effect. Jira:RHEL-40831 4.16. The web console New package: cockpit-files The cockpit-files package provides the File manager page in the RHEL web console. With the File manager, you can perform the following actions: Browse files and directories on file systems you can access Sort files and directories by various criteria Filter displayed files by a sub-string Copy, move, delete, and rename files and directories Create directories Upload files Bookmark file paths Use keyboard shortcuts for the actions Jira:RHELDOCS-16362 [1] 4.17. Red Hat Enterprise Linux System Roles Support for new ha_cluster system role features The ha_cluster system role now supports the following features: Configuring utilization attributes for node and primitive resources. Configuring node addresses and SBD options by using the ha_cluster_node_options variable. If both ha_cluster_node_options and ha_cluster variables are defined, their values are merged, with values from ha_cluster_node_options having precedence. Configuring access control lists (ACLs). Configuring Pacemaker alerts to take an external action when a cluster event such as node failure or resource starting or stopping occurs. Easy installation of agents for cloud environments by setting the ha_cluster_install_cloud_agents variable to true . Jira:RHEL-30111 , Jira:RHEL-17271, Jira:RHEL-27186 , Jira:RHEL-33532 Support for configuring GFS2 file systems by using RHEL system roles Red Hat Enterprise Linux 9.5 supports the configuration and management of the Red Hat Global File System 2 (GFS2) by using the gfs2 RHEL system role. The role creates GFS2 file systems in a Pacemaker cluster managed with the pcs command-line interface. Previously, setting up GFS2 file systems in a supported configuration required you to follow a long series of steps to configure the storage and cluster resources. The gfs2 role simplifies the process. Using the role, you can specify only the minimum information needed to configure GFS2 file systems in a RHEL high availability cluster. The gfs2 role performs the following tasks: Installing the packages necessary for configuring a GFS2 file system in a Red Hat high availability cluster Setting up the dlm and lvmlockd cluster resources Creating the LVM volume groups and logical volumes required by the GFS2 file system Creating the GFS2 file system and cluster resources with the necessary resource constraints Jira:RHELDOCS-18629 [1] New sudo RHEL system role sudo is a critical part of RHEL system configuration. With the new sudo RHEL system role, you can consistently manage sudo configuration at scale across your RHEL systems. Jira:RHEL-37549 The storage RHEL system role can now manage Stratis pools With this enhancement, you can use the storage RHEL system role to complete the following tasks: Create a new encrypted and unencrypted Stratis pool Add new volumes to the existing Stratis pool Add new disks to the Stratis pool For details on how to manage Stratis pools and other related information, see the resources in the /usr/share/doc/rhel-system-roles/storage/ directory. 
Jira:RHEL-31854 New variables in the journald RHEL system role: journald_rate_limit_interval_sec and journald_rate_limit_burst The following two variables have been added to the journald RHEL system role: journald_rate_limit_interval_sec (integer, defaults to 30): Configures a time interval in seconds, within which only the journald_rate_limit_burst log messages are handled. The journald_rate_limit_interval_sec variable corresponds to the RateLimitIntervalSec setting in the journald.conf file. journald_rate_limit_burst (integer, defaults to 10 000): Configures the upper limit of log messages, which are handled within the time defined by journald_rate_limit_interval_sec . The journald_rate_limit_burst variable corresponds to the RateLimitBurst setting in the journald.conf file. As a result, you can use these settings to tune the performance of the journald service to handle applications that log many messages in a short period of time. For more details, see the resources in the /usr/share/doc/rhel-system-roles/journald/ directory. Jira:RHEL-30170 New variables in the podman RHEL system role: podman_registry_username and podman_registry_password The podman RHEL system role now enables you to specify the container image registry credentials either globally or on a per-specification basis. For that purpose, you must configure both role variables: podman_registry_username (string, defaults to unset): Configures the username for authentication with the container image registry. You must also set the podman_registry_password variable. You can override podman_registry_username on a per-specification basis with the registry_username variable. Each operation involving credentials would then be performed according to the detailed rules and protocols defined in that specification. podman_registry_password (string, defaults to unset): Configures the password for authentication with the container image registry. You must also set the podman_registry_username variable. You can override podman_registry_password on a per-specification basis with the registry_password variable. Each operation involving credentials would then be performed according to the detailed rules and protocols defined in that specification. For security, encrypt the password using the Ansible Vault feature. As a result, you can use the podman RHEL system role to manage containers with images, whose registries require authentication for access. For more details, see the resources in the /usr/share/doc/rhel-system-roles/podman/ directory. Jira:RHEL-30185 New variable in the postfix RHEL system role: postfix_files The postfix RHEL system role now enables you to configure extra files for the Postfix mail transfer agent. For that purpose, you can use the following role variable: postfix_files Defines a list of files to be placed in the /etc/postfix/ directory that can be converted into Postfix Lookup Tables if needed. This variable enables you to configure Simple Authentication and Security Layer (SASL) credentials, and similar. For security, encrypt files that contain credentials and other secrets using the Ansible Vault feature. As a result, you can use the postfix RHEL system role to create these extra files and integrate them in your Postfix configuration. For more details, see the resources in the /usr/share/doc/rhel-system-roles/postfix/ directory. 
Jira:RHEL-46854 The snapshot RHEL system role now supports managing snapshots of LVM thin pools With thin provisioning, you can use the snapshot RHEL system role to manage snapshots of LVM thin pools. These thin snapshots are space-efficient and only grow as data is written or modified after the snapshot is taken. The role automatically detects if the specified volume is scheduled for a thin pool. The added feature could be useful in environments where you need to take frequent snapshots without consuming much physical storage. Jira:RHEL-48227 New option in the logging RHEL system role: reopen_on_truncate The files input type of the logging_inputs variable now supports the following option: reopen_on_truncate (boolean, defaults to false) Configures the rsyslog service to re-open the input log file if it was truncated, such as during log rotation. The reopen_on_truncate role option corresponds to the reopenOnTruncate parameter for rsyslog . As a result, you can configure rsyslog in an automated fashion through the logging RHEL system role to re-open an input log file if it was truncated. For more details, see the resources in the /usr/share/doc/rhel-system-roles/logging/ directory. Jira:RHEL-46590 [1] New variable in the logging RHEL system role: logging_custom_config_files You can provide custom logging configuration files by using the following variable for the logging RHEL system role: logging_custom_config_files (list) Configures a list of configuration files to copy to the default logging configuration directory. For example, for the rsyslog service it is the /etc/rsyslog.d/ directory. This assumes the default logging configuration loads and processes the configuration files in that directory. The default rsyslog configuration has a directive such as USDIncludeConfig /etc/rsyslog.d/*.conf . As a result, you can use customized configurations not provided by the logging RHEL system role. For more details, see the resources in the /usr/share/doc/rhel-system-roles/logging/ directory. Jira:RHEL-40273 The logging RHEL system role can set ownership and permissions for rsyslog files and directories The files output type of the logging_outputs variable now supports the following options: mode (raw, defaults to null): Configures the FileCreateMode parameter associated with the omfile module in the rsyslog service. owner (string, defaults to null): Configures the fileOwner or fileOwnerNum parameter associated with the omfile module in rsyslog . If the value is an integer, it sets fileOwnerNum . Otherwise, it sets fileOwner . group (string, defaults to null): Configures the fileGroup or fileGroupNum parameter associated with the omfile module in rsyslog . If the value is an integer, it sets fileGroupNum . Otherwise, it sets fileGroup . dir_mode (defaults to null): Configures the DirCreateMode parameter associated with the omfile module in rsyslog . dir_owner (defaults to null): Configures the dirOwner or dirOwnerNum parameter associated with the omfile module in rsyslog . If the value is an integer, it sets dirOwnerNum . Otherwise, it sets dirOwner . dir_group (defaults to null): Configures the dirGroup or dirGroupNum parameter associated with the omfile module in rsyslog . If the value is an integer, it sets dirGroupNum . Otherwise, it sets dirGroup . As a result, you can set ownership and permissions for files and directories created by rsyslog . Note that the file or directory properties are the same as the corresponding variables in the Ansible file module. 
For more details, see the resources in the /usr/share/doc/rhel-system-roles/logging/ directory. Alternatively, review the output of the ansible-doc file command. Jira:RHEL-34935 [1] Using the storage RHEL system role creates fingerprints on managed nodes If not already present, storage creates a unique identifier (fingerprint) every time you run this role. The fingerprint has the form of the # system_role:storage string written to the /etc/fstab file on your managed nodes. As a result, you can track which nodes are managed by storage . Jira:RHEL-30888 New variables in the podman RHEL system role: podman_registry_certificates and podman_validate_certs The following two variables have been added to the podman RHEL system role: podman_registry_certificates (list of dictionary elements): Enables you to manage TLS certificates and keys used to connect to the specified container image registry. podman_validate_certs (boolean, defaults to null): Controls whether pulling images from container image registries will validate TLS certificates or not. The default null value means that it is used whatever the default configured by the containers.podman.podman_image module is. You can override the podman_validate_certs variable on a per-specification basis with the validate_certs variable. As a result, you can use the podman RHEL system role to configure TLS settings for connecting to container image registries. For more details, see the resources in the /usr/share/doc/rhel-system-roles/podman/ directory. Alternatively, you can review the containers-certs(5) manual page. Jira:RHEL-33547 New variable in the podman RHEL system role: podman_credential_files Some operations need to pull container images from registries in an automated or unattended way and cannot use the podman_registry_username and podman_registry_password variables. Therefore, the podman RHEL system role now accepts the containers-auth.json file to authenticate against container image registries. For that purpose, you can use the following role variable: podman_credential_files (list of dictionary elements) Each dictionary element in the list defines a file with user credentials for authentication to private container image registries. For security, encrypt these credentials using the Ansible Vault feature. You can specify file name, mode, owner, group of the file, and can specify the contents in different ways. See the role documentation for more details. As a result, you can input container image registry credentials for automated and unattended operations. For more details, see the resources in the /usr/share/doc/rhel-system-roles/podman/ directory. Alternatively, you can review the containers-auth.json(5) and containers-registries.conf(5) manual pages. Jira:RHEL-30183 The nbde_client RHEL system role now enables you to skip running certain configurations With the nbde_client RHEL system role you can now disable the following mechanisms: Initial ramdisk NetworkManager flush module Dracut flush module The clevis-luks-askpass utility unlocks some storage volumes late in the boot process after the NetworkManager service puts the operating system on the network. Therefore, no configuration changes to the mentioned mechanisms are necessary. As a result, you can disable the mentioned configurations from being run to support advanced networking setups, or volume decryption to occur late in the boot process. 
Jira:RHEL-45717 The ssh RHEL system role now recognizes the ObscureKeystrokeTiming and ChannelTimeout configuration options The ssh RHEL system role has been updated to reflect addition of the following configuration options in the OpenSSH utility suite: ObscureKeystrokeTiming (yes|no|interval specifier, defaults to 20): Configures whether the ssh utility should obscure the inter-keystroke timings from passive observers of network traffic. ChannelTimeout : Configures whether and how quickly the ssh utility should close inactive channels. When using the ssh RHEL system role, you can use the new options such as in this example play: Jira:RHEL-40180 The src parameter was added to the network RHEL system role The src parameter to the route sub-option of the ip option for the network_connections variable has been added. This parameter specifies the source IP address for a route. Typically, it is useful for the multi-WAN connections. These setups ensure that a machine has multiple public IP addresses, and outbound traffic uses a specific IP address tied to a particular network interface. As a result, support for the src parameter provides better control over traffic routing by ensuring a more robust and flexible network configuration capability in the described scenarios. For more details, see the resources in the /usr/share/doc/rhel-system-roles/network/ directory. Jira:RHEL-3252 The storage RHEL system role can now resize LVM physical volumes If the size of a block device has changed and you use this device in an LVM, you can adjust the LVM physical volume as well. With this enhancement, you can use the storage RHEL system role to resize LVM physical volumes to match the size of the underlying block devices after you resized it. To enable automatic resizing, set grow_to_fill: true on the pool in your playbook. Jira:RHEL-14862 4.18. Virtualization New features for 64-bit ARM hosts The following virtualization features have now become fully supported on the 64-bit ARM architecture: 4 KiB memory page size virtual machines (VMs) on 4kiB memory page size hosts. Note that hosts and guests with different page sizes are still not supported. The only supported page size combinations are 4 KiB/4 KiB and 64 KiB/64 KiB. The virtiofs feature for sharing files between the host and the VM Guest error RAS recovery (Reliability, Availability, Serviceability) The pvpanic event logging device The virtio-mem feature for dynamic memory assignment As a result, VMs hosted on RHEL 9 running on an 64-bit ARM system will be able to use these features. Jira:RHEL-43234 [1] RHEL supports live migrating VMs with attached NVIDIA vGPUs With this update, you can now live migrate a running virtual machine with attached vGPUs to another KVM host. Currently, this is only possible with NVIDIA GPUs. This functionality is available only with certain NVIDIA Virtual GPU Software Driver versions. Refer to the relevant NVIDIA vGPU documentation for more details. Jira:RHELDOCS-16572 [1] nbdkit rebased to version 1.38 The nbdkit package has been rebased to upstream version 1.38, which provides various bug fixes and enhancements. The most notable changes are the following: Block size advertising has been enhanced and a new read-only filter has been added. The Python and OCaml bindings support more features of the server API. Internal struct integrity checks have been added to make the server more robust. For a complete list of changes, see the upstream release notes . 
Jira:RHEL-31884 Adjustable packet loss prevention added for the NetKVM driver This update adds the MinRxBufferPercent parameter for the NetKVM driver, which you can use to reduce the risk of received packet loss in Windows virtual machines. The default value of MinRxBufferPercent is 0, and setting a higher value, up to 100, improves the prevention of packet loss, but might increase CPU consumption during high network traffic. Jira:RHEL-19627 4.19. RHEL in cloud environments OpenTelemetry Collector is available for RHEL on AWS While running RHEL on Amazon Web Services (AWS), you can now use the OpenTelemetry (OTel) framework to collect and send telemetry data, for example, logs. You can maintain and debug the RHEL cloud instances by using the OTel framework. With this update, RHEL includes the OTel Collector service, which you can use to manage logs. The OTel Collector gathers, processes, transforms, and exports logs to and from various formats and external back ends. You can also use the OTel Collector to aggregate the collected data and generate metrics useful for analytics services. For example, you can configure OTel Collector to send data to Amazon Web Services (AWS) CloudWatch, which enhances the scope and accuracy of data obtained by CloudWatch from RHEL instances. For details, see Configuring the OpenTelemetry Collector for RHEL on public cloud platforms . Jira:RHELDOCS-18125 [1] awscli2 is generally available for RHEL on AWS With the awscli2 utility, you can now use Amazon Web Services (AWS) APIs from a RHEL instance to deploy new infrastructure offerings, and manage existing deployments. Note that installing awscli2 from a Red Hat Enterprise Linux repository ensures that awscli2 is installed from a trusted source and receives automatic updates. As a result, you can gather information regarding cloud deployment services, manage infrastructure resources, and refer to built-in documentation provided with awscli2 . Jira:RHEL-14523 [1] Log collection on Azure is now disabled by default Previously, the Windows Azure Linux Agent (WALA) in Microsoft Azure collected debugging logs on virtual machines (VMs) by default. However, these agent logs might contain confidential information. To improve data security, WALA is now disabled by default, and does not collect any data on the VM. To re-enable log collection, do the following: Edit the /etc/waagent.conf file. Set the Logs.Collect parameter value to y . Jira:RHEL-7273 [1] 4.20. Supportability The --api-url option is now available With the --api-url option you can call another API according to the requirements. For example, the API for an OCP cluster. Example: sos collect --cluster-type=ocp --cluster-option ocp.api-url=_<API_URL> --alloptions . Jira:RHEL-24523 The new --skip-cleaning-files option is now available The --skip-cleaning-files option for the sos report command allows you to skip cleaning selected files. The option supports globs and wildcards. Example: sos report -o host --batch --clean --skip-cleaning-files ' hostname ' . Jira:RHEL-30893 [1] The plugin option names now use only hyphens instead of underscores To ensure consistency across sos global options, the plugin option names now use only hyphens instead of underscores For example, the networking plugin namespace_pattern option is now namespace-pattern and must be specified by using the --plugin-option networking.namespace-pattern=<pattern> syntax. Jira:RHELDOCS-18655 [1] 4.21. 
Containers Image mode for RHEL now supports FIPS mode With this enhancement, you can enable FIPS mode when building a bootc image to configure the system to use only FIPS-approved modules. You can use bootc-image-builder , which requires enabling the FIPS crypto policy in the Containerfile configuration, or use the RHEL Anaconda installation, which, in addition to enabling FIPS mode in the Containerfile, also requires adding the fips=1 kernel argument when booting the system installation. See Installing the system with FIPS mode enabled for more details. The following is a Containerfile with instructions to enable the fips=1 kernel argument: Jira:RHELDOCS-18585 [1] Image mode for RHEL now supports logically bound app images With this enhancement, you have support for container images whose lifecycle is bound to the base bootc image. This helps unite the operational processes for applications and the operating system; the app images are referenced from the base image as image files or an equivalent. As a result, you can manage multiple container images for system installations; for example, for a disconnected installation, all of the images must be mirrored, not just one. Jira:RHELDOCS-18666 [1] Podman and Buildah support adding OCI artifacts to image indexes With this update, you can create artifact manifests and add them to image indexes. The buildah manifest add command now supports the following options: the --artifact option to create artifact manifests the --artifact-type , --artifact-config-type , --artifact-layer-type , --artifact-exclude-titles , and --subject options to adjust the contents of the artifact manifests it creates. The buildah manifest annotate command now supports the following options: the --index option to set annotations on the index itself instead of one of the entries in the image index the --subject option for setting the subject field of an image index. The buildah manifest create command now supports the --annotation option to add annotations to the new image index. Jira:RHEL-33572 Option is available to disable Podman health check events This enhancement adds a new healthcheck_events option in the containers.conf configuration file under the [engine] section to disable the generation of health_status events. Set healthcheck_events=false to disable logging of health check events. Jira:RHEL-34603 Runtime resource changes in Podman are persistent The updates of container configuration made by using the podman update command are persistent. Note that this enhancement applies to both the SQLite and BoltDB database backends. Jira:RHEL-33567 Building multi-architecture images is fully supported The podman farm build command, which creates multi-architecture container images, is now fully supported. A farm is a group of machines that have a UNIX Podman socket running on them. The nodes in the farm can be machines of various architectures. The podman farm build command is faster than the podman build --arch --platform command. You can use podman farm build to perform the following actions: Build an image on all nodes in a farm. Bundle the images built on all nodes in a farm into a manifest list. Run the podman build command on all the farm nodes. Push the images to the registry specified by using the --tag option. Locally create a manifest list. Push the manifest list to the registry. The manifest list contains one image per native architecture type present in the farm.
Jira:RHEL-34609 Quadlets for pods in Podman are available Beginning with Podman v5.0, you can use Quadlet to automatically generate a systemd service file from a pod description. Jira:RHEL-33574 The Podman v2.0 RESTful API has been updated The new fields has been added to the libpod/images/json endpoint: The isManifest boolean field to determine if the target is a manifest or not. The libpod endpoint returns both images and manifest lists. The os and arch fields for image listing. Jira:RHEL-34612 Kubernetes YAML now supports a data volume container as an init container A list of images to automatically mount as volumes can now be specified in Kubernetes YAML by using the "io.podman.annotations.kube.image.automount/USDctrname" annotation. Image-based mounts using podman run --mount type=image,source=<image>,dst=<path>,subpath=<path> now support a new option, subpath , to mount only part of the image into the container. Jira:RHEL-34605 The Container Tools packages have been updated The updated Container Tools RPM meta-package, which contains the Podman, Buildah, Skopeo, crun , and runc tools, is now available. Podman v5.0 contains the following notable bug fixes and enhancements over the version: The podman manifest add command now supports a new --artifact option to add OCI artifacts to a manifest list. The podman create , podman run , and podman push commands now support the --retry and --retry-delay options to configure retries for pushing and pulling images. The podman run and podman exec commands now support the --preserve-fd option to pass a list of file descriptors into the container. It is an alternative to --preserve-fds , which passes a specific number of file descriptors. Quadlet now supports templated units. The podman kube play command can now create image-based volumes by using the volume.podman.io/image annotation. Containers created with the podman kube play command can now include volumes from other containers by using a new annotation, io.podman.annotations.volumes-from . Pods created with the podman kube play command can now set user namespace options by using the io.podman.annotations.userns annotation in the pod definition. The --gpus option to podman create and podman run is now compatible with Nvidia GPUs. The --mount option to podman create and podman run supports a new mount option, no-dereference , to mount a symlink instead of its de-referenced target into a container. Podman now supports the new --config global option to point to a Docker configuration where registry login credentials can be sourced. The podman ps --format command now supports the new .Label format specifier. The uidmapping and gidmapping options to the podman run --userns=auto option can now map to host IDs by prefixing host IDs with the @ symbol. Quadlet now supports systemd-style drop-in directories. Quadlet now supports creating pods by using the new .pod unit files. Quadlet now supports two new keys, Entrypoint and StopTimeout , in .container files. Quadlet now supports specifying the Ulimit key multiple times in .container files to set more than one ulimit on a container. Quadlet now supports setting the Notify key to healthy in .container files, to only notify that a container has started when its health check begins passing. The output of the podman inspect command for containers has changed. The Entrypoint field changes from a string to an array of strings and StopSignal from an integer to a string. 
The podman inspect command for containers now returns nil for health checks when inspecting containers without health checks. It is no longer possible to create new BoltDB databases. Attempting to do so results in an error. All new Podman installations now use the SQLite database backend. Existing BoltDB databases remain usable. Support for CNI networking is gated by a build tag and is not enabled by default. Podman now prints warnings when used on cgroups v1 systems. Support for cgroups v1 is deprecated and will be removed in a future release. You can set the PODMAN_IGNORE_CGROUPSV1_WARNING environment variable to suppress warnings. Network statistics sent over the Docker-compatible API are now per-interface, and not aggregated, which improves Docker compatibility. The default tool for rootless networking has been changed from slirp4netns to pasta for improved performance. As a result, networks named pasta are no longer supported. Using multiple filters with the List Images REST API now combines the filters with AND instead of OR, improving Docker compatibility. The parsing for several Podman CLI options which accept arrays has been changed to no longer accept string-delimited lists, and instead to require the option to be passed multiple times. These options are: The --annotation option to podman manifest annotate and podman manifest add The --configmap , --log-opt , and --annotation options to podman kube play The --pubkeysfile option to podman image trust set The --encryption-key and --decryption-key options to podman create , podman run , podman push and podman pull The --env-file option to podman exec , the --bkio-weight-device , --device-read-bps , --device-write-bps , --device-read-iops , --device-write-iops , --device , --label-file , --chrootdirs , --log-opt , --env-file options to podman create and podman run The --hooks-dir and --module global options The podman system reset command no longer waits for running containers to stop, and instead immediately sends the SIGKILL signal. The podman network inspect command now includes running containers that use the network in its output. The podman compose command is now supported on other architectures in addition to AMD and Intel 64-bit architectures (x86-64-v2) and the 64-bit ARM architecture (ARMv8.0-A).. The --no-trunc option to the podman kube play and podman kube generate commands has been deprecated. Podman now complies to the Kubernetes specification for annotation size, which removes the need for this option. Connections from the podman system connection command and farms from the podman farm command are now written to a new configuration file called podman-connections.conf file. As a result, Podman no longer writes to the containers.conf file. Podman still respects existing connections from containers.conf . Most podman farm subcommands no longer need to connect to the machines in the farm to run. The podman create and podman run commands no longer require specifying an entrypoint on the command line when the container image does not define one. In this case, an empty command is passed to the OCI runtime, and the resulting behavior is runtime-specific. A new API endpoint, /libpod/images/USDname/resolve , has been added to resolve a potential short name to a list of fully-qualified image references Podman, which you can use to pull the image. For more information about notable changes, see upstream release notes . 
Jira:RHEL-32714 The --compat-volumes option is available for Podman and Buildah You can use the new --compat-volumes option with the buildah build , podman build , and podman farm build commands. This option triggers special handling for the contents of directories marked using the VOLUME instruction such that their contents can subsequently only be modified by ADD and COPY instructions. Any changes made in those locations by RUN Instructions will be discarded. Previously, this behavior was the default, but it is now disabled by default. Jira:RHEL-52239 A new rhel10-beta/rteval container image The real-time registry.redhat.io/rhel10-beta/rteval container image is now available in the Red Hat Container Registry to run latency analysis on either a standalone RHEL installation. With rhel10-beta/rteval container image, you can perform latency testing within a containerized setup to determine if such a solution is viable for your real-time workloads or to compare results against a bare metal run of rteval . To use this feature, subscribe to RHEL with real-time support. No tuning guidelines are provided. Jira:RHELDOCS-18522 [1] The containers.conf file is now read-only The system connections and farm information stored in the containers.conf file is now read-only. The system connections and farm information will now be stored in the podman.connections.json file, managed only by Podman. Podman continues to support the old configuration options such as [engine.service_destinations] and the [farms] section. You can still add connections or farms manually if needed however, it is not possible to delete a connection from the containers.conf file with the podman system connection rm command. You can still manually edit the containers.conf file if needed. System connections that were added by Podman v4.0 remain unchanged after the upgrade to Podman v5.0. Jira:RHEL-40637 macvlan and ipvlan network interface names are configurable in containers.conf To specify macvlan and ipvlan networks, you can adjust the name of the network interface created inside containers by using the new interface_name field in the containers.conf configuration file. Jira:RHELDOCS-18769 [1] bootc-image-builder now supports defining and injecting custom Kickstart files to ISO builds With this enhancement, now you can define a Kickstart by setting users, customize partitioning, inject key, and inject the Kickstart file to an ISO build to configure the installation process. The resulting disk image creates a self-contained installer that automates and deploys devices, disconnected systems, edge devices, between others. As a result, it is much easier to create customized media with bootc-image-builder . Jira:RHELDOCS-18734 [1] Support to building GCP images by using bootc-image-builder By using the bootc-image-builder tool you can now generate .gce disk images and provision the instances on the Google Compute Engine (GCE) platform. Jira:RHELDOCS-18472 [1] Support to creating and deploying VMDK with bootc-image-builder With this enhancement, now you can create a Virtual Machine Disk (VMDK) from a bootc image, by using the bootc-image-builder tool, and deploy VMDK images to VMware vSphere. Jira:RHELDOCS-18398 [1] The podman pod inspect command now provides a JSON array regardless of the number of pods Previously, the podman pod inspect command omitted the JSON array when inspecting a single pod. With this update, the podman pod inspect command now produces a JSON array in the output regardless of the number of pods inspected. 
Jira:RHELDOCS-18770 [1]
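As a quick, illustrative check of the podman pod inspect change described above, the following shell session creates a throwaway pod and confirms that the inspect output is a JSON array even for a single pod. The pod name demo-pod and the use of python3 -m json.tool for pretty-printing are assumptions made for this sketch, not part of the release notes.

# Create a single pod to inspect (the name is arbitrary for this example)
podman pod create --name demo-pod

# With Podman 5.0, inspect output is a JSON array even for one pod,
# so the first line of the pretty-printed output is expected to be '['
podman pod inspect demo-pod | python3 -m json.tool | head -n 1

# Clean up the example pod
podman pod rm demo-pod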
[ "IKE={AES_CBC,3DES_CBC}-{HMAC_SHA2_256,HMAC_SHA2_512HMAC_SHA1}-{MODP2048,MODP1536,DH19,DH31} ESP={AES_CBC,3DES_CBC}-{HMAC_SHA1_96,HMAC_SHA2_512_256,HMAC_SHA2_256_128}-{AES_GCM_16_128,AES_GCM_16_256} AH=HMAC_SHA1_96+HMAC_SHA2_512_256+HMAC_SHA2_256_128", "sudo systemctl enable intel_lpmd.service", "sudo systemctl start intel_lpmd.service", "--- interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 next-hop-interface: eth1 table-id: 254 cwnd: 20", "--- interfaces: - name: hosta_conn type: ipsec ipv4: enabled: true dhcp: true libreswan: left: 192.0.2.1 leftid: '%fromcert' leftrsasigkey: '%cert' leftmodecfgclient: false leftcert: leftcert.example.com right: 192.0.2.2 rightid: '%fromcert' rightrsasigkey: '%cert' rightcert: rightcert.example.com rightsubnet: 192.0.2.2/32", "interfaces: - name: hosta type: ipsec ipv4: enabled: true dhcp: true libreswan: left: 192.0.2.246 leftid: _<hosta.example.org>_ leftcert: _<hosta.example.org>_ leftsubnet: 192.0.4.0/24 leftmodecfgclient: no right: 192.0.2.157 rightid: _<hostb.example.org>_ rightsubnet: 192.0.3.0/24 ikev2: insist", "dnf module install nodejs:22", "export GLIBC_TUNABLES=glibc.cpu.prefer_map_32bit_exec=1", "dnf install gcc-toolset-14", "scl enable gcc-toolset-14 <tool>", "scl enable gcc-toolset-14 bash", "dnf install java-17-openjdk-devel", "alternatives --auto java alternatives --auto javac", "[google] mail=false", "--- - name: Non-exclusive sshd configuration hosts: managed-node-01.example.com tasks: - name: Configure ssh to obscure keystroke timing and set 5m session timeout ansible.builtin.include_role: name: rhel-system-roles.ssh vars: ssh_ObscureKeystrokeTiming: \"interval:80\" ssh_ChannelTimeout: \"session=5m\"", "FROM registry.redhat.io/rhel9/rhel-bootc:latest# Enable fips=1 kernel argument: https://containers.github.io/bootc/building/kernel-arguments.html COPY 01-fips.toml /usr/lib/bootc/kargs.d/ Install and enable the FIPS crypto policy RUN dnf install -y crypto-policies-scripts && update-crypto-policies --no-reload --set FIPS" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.5_release_notes/new-features
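To make the GCC Toolset 14 entry in the release notes above more concrete, the following minimal session sketches the install-and-run workflow using the commands captured in the accompanying command listing; the 'gcc --version' invocation is only an example of a tool run through scl, not a required step.

# Install GCC Toolset 14 (run as root)
dnf install gcc-toolset-14

# Run a single tool from the toolset, for example the bundled GCC
scl enable gcc-toolset-14 'gcc --version'

# Start a shell session in which the toolset versions override the system tools
scl enable gcc-toolset-14 bash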
A.9. Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in BIOS
A.9. Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in BIOS Note To expand your expertise, you might also be interested in the Red Hat Virtualization (RH318) training course. This section describes how to identify hardware virtualization extensions and enable them in your BIOS if they are disabled. The Intel VT-x extensions can be disabled in the BIOS. Certain laptop vendors have disabled the Intel VT-x extensions by default in their CPUs. The virtualization extensions cannot be disabled in the BIOS for AMD-V. See the following section for instructions on enabling disabled virtualization extensions. Verify that the virtualization extensions are enabled in the BIOS. The BIOS settings for Intel VT or AMD-V are usually in the Chipset or Processor menus. The menu names may vary from this guide; the virtualization extension settings may be found in Security Settings or other non-standard menu names. Procedure A.3. Enabling virtualization extensions in BIOS Reboot the computer and open the system's BIOS menu. This can usually be done by pressing the Delete key, the F1 key, or the Alt and F4 keys, depending on the system. Enabling the virtualization extensions in BIOS Note Many of the steps below may vary depending on your motherboard, processor type, chipset, and OEM. See your system's accompanying documentation for the correct information on configuring your system. Open the Processor submenu. The processor settings menu may be hidden in the Chipset , Advanced CPU Configuration , or Northbridge menus. Enable Intel Virtualization Technology (also known as Intel VT-x). The AMD-V extensions cannot be disabled in the BIOS and should already be enabled. The virtualization extensions may be labeled Virtualization Extensions , Vanderpool , or various other names depending on the OEM and system BIOS. Enable Intel VT-d or AMD IOMMU, if the options are available. Intel VT-d and AMD IOMMU are used for PCI device assignment. Select Save & Exit . Reboot the machine. When the machine has booted, run grep -E "vmx|svm" /proc/cpuinfo . Adding the --color option is optional, but useful if you want the search term highlighted. If the command produces output, the virtualization extensions are enabled. If there is no output, your system may not have the virtualization extensions, or the correct BIOS setting may not be enabled.
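As a minimal sketch of the verification step above, the following commands check /proc/cpuinfo for the vmx (Intel VT-x) or svm (AMD-V) flags; the second command, which simply counts matching lines, is an optional addition not shown in the original procedure.

# Highlight the vmx or svm flag in the CPU feature list
grep --color -E "vmx|svm" /proc/cpuinfo

# Optionally, count how many logical CPUs report the flag (0 means not available or disabled in BIOS)
grep -c -E "vmx|svm" /proc/cpuinfo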
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Troubleshooting-Enabling_Intel_VT_x_and_AMD_V_virtualization_hardware_extensions_in_BIOS
13.2.6. Configuring Services: PAM
13.2.6. Configuring Services: PAM Warning A mistake in the PAM configuration file can lock users out of the system completely. Always back up the configuration files before performing any changes, and keep a session open so that any changes can be reverted. SSSD provides a PAM module, sssd_pam , which instructs the system to use SSSD to retrieve user information. The PAM configuration must include a reference to the SSSD module, and then the SSSD configuration sets how SSSD interacts with PAM. Procedure 13.3. Configuring PAM Use authconfig to enable SSSD for system authentication. This automatically updates the PAM configuration to reference all of the SSSD modules: These modules can be set to include statements, as necessary. Open the sssd.conf file. Make sure that PAM is listed as one of the services that works with SSSD. In the [pam] section, change any of the PAM parameters. These are listed in Table 13.3, "SSSD [pam] Configuration Parameters" . Restart SSSD. Table 13.3. SSSD [pam] Configuration Parameters (each parameter takes an integer value):
offline_credentials_expiration - Sets how long, in days, to allow cached logins if the authentication provider is offline. This value is measured from the last successful online login. If not specified, this defaults to zero ( 0 ), which is unlimited.
offline_failed_login_attempts - Sets how many failed login attempts are allowed if the authentication provider is offline. If not specified, this defaults to zero ( 0 ), which is unlimited.
offline_failed_login_delay - Sets how long to prevent login attempts if a user hits the failed login attempt limit. If set to zero ( 0 ), the user cannot authenticate while the provider is offline once he hits the failed attempt limit. Only a successful online authentication can re-enable offline authentication. If not specified, this defaults to five ( 5 ).
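A minimal sketch of the procedure above, based on the command listing that accompanies this section; the grep check of sssd.conf is an illustrative addition and assumes the default /etc/sssd/sssd.conf location.

# Enable SSSD for system authentication; this updates the PAM configuration automatically
authconfig --update --enablesssd --enablesssdauth

# Confirm that "pam" appears in the services line of the [sssd] section
grep -A 4 '^\[sssd\]' /etc/sssd/sssd.conf

# Restart SSSD so that changes to the [pam] section take effect
service sssd restart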
[ "authconfig --update --enablesssd --enablesssdauth", "#%PAM-1.0 This file is auto-generated. User changes will be destroyed the next time authconfig is run. auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_sss.so use_first_pass auth required pam_deny.so account required pam_unix.so account sufficient pam_localuser.so account sufficient pam_succeed_if.so uid < 500 quiet account [default=bad success=ok user_unknown=ignore] pam_sss.so account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok password sufficient pam_sss.so use_authtok password required pam_deny.so session optional pam_keyinit.so revoke session required pam_limits.so session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session sufficient pam_sss.so session required pam_unix.so", "vim /etc/sssd/sssd.conf", "[sssd] config_file_version = 2 reconnection_retries = 3 sbus_timeout = 30 services = nss, pam", "[pam] reconnection_retries = 3 offline_credentials_expiration = 2 offline_failed_login_attempts = 3 offline_failed_login_delay = 5", "~]# service sssd restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/Configuration_Options-PAM_Configuration_Options
Chapter 94. Paho
Chapter 94. Paho Both producer and consumer are supported. The Paho component provides a connector for the MQTT messaging protocol using the Eclipse Paho library . Paho is one of the most popular MQTT libraries, so if you would like to integrate it with your Java project, the Camel Paho connector is the way to go. 94.1. Dependencies When using paho with the Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-starter</artifactId> </dependency> 94.2. URI format paho:topic[?options] Where topic is the name of the topic. 94.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 94.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connections, and so on. Since components typically have pre-configured defaults for the most common cases, you may only need to configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 94.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 94.4. Component Options The Paho component supports 31 options, which are listed below. Name Description Default Type automaticReconnect (common) Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String cleanSession (common) Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable.
true boolean clientId (common) MQTT client identifier. The identifier must be unique. String configuration (common) To use the shared Paho configuration. PahoConfiguration connectionTimeout (common) Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 int filePersistenceDirectory (common) Base directory used by file persistence. Will by default use user directory. String keepAliveInterval (common) Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 int maxInflight (common) Sets the max inflight. please increase this value in a high traffic environment. The default value is 10. 10 int maxReconnectDelay (common) Get the maximum time (in millis) to wait between reconnects. 128000 int mqttVersion (common) Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. int persistence (common) Client persistence to be used - memory or file. Enum values: FILE MEMORY MEMORY PahoPersistence qos (common) Client quality of service level (0-2). 2 int retained (common) Retain option. false boolean serverURIs (common) Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). 
As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String willPayload (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String willQos (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. int willRetained (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. false boolean willTopic (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) To use a shared Paho client. MqttClient customWebSocketHeaders (advanced) Sets the Custom WebSocket Headers for the WebSocket Connection. 
Properties executorServiceTimeout (advanced) Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 int httpsHostnameVerificationEnabled (security) Whether SSL HostnameVerifier is enabled or not. The default value is true. true boolean password (security) Password to be used for authentication against the MQTT broker. String socketFactory (security) Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. SocketFactory sslClientProps (security) Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. Properties sslHostnameVerifier (security) Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. 
HostnameVerifier userName (security) Username to be used for authentication against the MQTT broker. String 94.5. Endpoint Options The Paho endpoint is configured using URI syntax: with the following path and query parameters: 94.5.1. Path Parameters (1 parameters) Name Description Default Type topic (common) Required Name of the topic. String 94.5.2. Query Parameters (31 parameters) Name Description Default Type automaticReconnect (common) Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String cleanSession (common) Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true boolean clientId (common) MQTT client identifier. The identifier must be unique. String connectionTimeout (common) Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 int filePersistenceDirectory (common) Base directory used by file persistence. Will by default use user directory. String keepAliveInterval (common) Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 int maxInflight (common) Sets the max inflight. please increase this value in a high traffic environment. The default value is 10. 10 int maxReconnectDelay (common) Get the maximum time (in millis) to wait between reconnects. 128000 int mqttVersion (common) Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. 
int persistence (common) Client persistence to be used - memory or file. Enum values: FILE MEMORY MEMORY PahoPersistence qos (common) Client quality of service level (0-2). 2 int retained (common) Retain option. false boolean serverURIs (common) Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String willPayload (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String willQos (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. int willRetained (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. false boolean willTopic (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. 
String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean client (advanced) To use an existing mqtt client. MqttClient customWebSocketHeaders (advanced) Sets the Custom WebSocket Headers for the WebSocket Connection. Properties executorServiceTimeout (advanced) Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 int httpsHostnameVerificationEnabled (security) Whether SSL HostnameVerifier is enabled or not. The default value is true. true boolean password (security) Password to be used for authentication against the MQTT broker. String socketFactory (security) Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. SocketFactory sslClientProps (security) Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. 
com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. Properties sslHostnameVerifier (security) Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. HostnameVerifier userName (security) Username to be used for authentication against the MQTT broker. String 94.6. Headers The following headers are recognized by the Paho component: Header Java constant Endpoint type Value type Description CamelMqttTopic PahoConstants.MQTT_TOPIC Consumer String The name of the topic CamelMqttQoS PahoConstants.MQTT_QOS Consumer Integer QualityOfService of the incoming message CamelPahoOverrideTopic PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC Producer String Name of topic to override and send to instead of topic specified on endpoint 94.7. Default payload type By default, the Camel Paho component operates on the binary payloads extracted out of (or put into) the MQTT message: // Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody("paho:topic"); // Send payload byte[] payload = "message".getBytes(); producerTemplate.sendBody("paho:topic", payload); But of course Camel's built-in type conversion API can perform the automatic data type transformations for you. In the example below, Camel automatically converts the binary payload into a String (and conversely): // Receive payload String payload = consumerTemplate.receiveBody("paho:topic", String.class); // Send payload String payload = "message"; producerTemplate.sendBody("paho:topic", payload); 94.8.
Samples For example, the following snippet reads messages from the MQTT broker installed on the same host as the Camel router: from("paho:some/queue") .to("mock:test"); While the snippet below sends messages to the MQTT broker: from("direct:test") .to("paho:some/target/queue"); For example, this is how to read messages from a remote MQTT broker: from("paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883") .to("mock:test"); And here is how to override the default topic and set a dynamic topic: from("direct:test") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${header.customerId}")) .to("paho:some/target/queue"); 94.9. Spring Boot Auto-Configuration The component supports 32 options, which are listed below. Name Description Default Type camel.component.paho.automatic-reconnect Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true Boolean camel.component.paho.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.paho.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.paho.broker-url The URL of the MQTT broker. tcp://localhost:1883 String camel.component.paho.clean-session Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true Boolean camel.component.paho.client To use a shared Paho client. The option is a org.eclipse.paho.client.mqttv3.MqttClient type. MqttClient camel.component.paho.client-id MQTT client identifier. The identifier must be unique. String camel.component.paho.configuration To use the shared Paho configuration. The option is a org.apache.camel.component.paho.PahoConfiguration type. PahoConfiguration camel.component.paho.connection-timeout Sets the connection timeout value.
This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 Integer camel.component.paho.custom-web-socket-headers Sets the Custom WebSocket Headers for the WebSocket Connection. The option is a java.util.Properties type. Properties camel.component.paho.enabled Whether to enable auto configuration of the paho component. This is enabled by default. Boolean camel.component.paho.executor-service-timeout Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 Integer camel.component.paho.file-persistence-directory Base directory used by file persistence. Will by default use user directory. String camel.component.paho.https-hostname-verification-enabled Whether SSL HostnameVerifier is enabled or not. The default value is true. true Boolean camel.component.paho.keep-alive-interval Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 Integer camel.component.paho.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.paho.max-inflight Sets the max inflight. please increase this value in a high traffic environment. The default value is 10. 10 Integer camel.component.paho.max-reconnect-delay Get the maximum time (in millis) to wait between reconnects. 128000 Integer camel.component.paho.mqtt-version Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. Integer camel.component.paho.password Password to be used for authentication against the MQTT broker. String camel.component.paho.persistence Client persistence to be used - memory or file. PahoPersistence camel.component.paho.qos Client quality of service level (0-2). 2 Integer camel.component.paho.retained Retain option. false Boolean camel.component.paho.server-u-r-is Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. 
Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String camel.component.paho.socket-factory Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. The option is a javax.net.SocketFactory type. SocketFactory camel.component.paho.ssl-client-props Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. 
com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. The option is a java.util.Properties type. Properties camel.component.paho.ssl-hostname-verifier Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. HostnameVerifier camel.component.paho.user-name Username to be used for authentication against the MQTT broker. String camel.component.paho.will-payload Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String camel.component.paho.will-qos Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. Integer camel.component.paho.will-retained Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. false Boolean camel.component.paho.will-topic Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String
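As an illustration of the auto-configuration options listed above, a Spring Boot application.properties file could combine the broker connection settings with a Last Will and Testament. The broker address, client identifier, and topic names below are placeholder values chosen for the example, not defaults:
camel.component.paho.broker-url=tcp://mqtt.example.com:1883
camel.component.paho.client-id=camel-paho-client-1
camel.component.paho.clean-session=false
camel.component.paho.qos=1
camel.component.paho.will-topic=clients/camel-paho-client-1/status
camel.component.paho.will-payload=offline
camel.component.paho.will-qos=1
camel.component.paho.will-retained=true
With these properties in place, routes can refer to plain paho:topic endpoints and inherit the component-level configuration instead of repeating it in every endpoint URI.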
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-starter</artifactId> </dependency>", "paho:topic[?options]", "paho:topic", "// Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody(\"paho:topic\"); // Send payload byte[] payload = \"message\".getBytes(); producerTemplate.sendBody(\"paho:topic\", payload);", "// Receive payload String payload = consumerTemplate.receiveBody(\"paho:topic\", String.class); // Send payload String payload = \"message\"; producerTemplate.sendBody(\"paho:topic\", payload);", "from(\"paho:some/queue\") .to(\"mock:test\");", "from(\"direct:test\") .to(\"paho:some/target/queue\");", "from(\"paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883\") .to(\"mock:test\");", "from(\"direct:test\") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple(\"USD{header.customerId}\")) .to(\"paho:some/target/queue\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-paho-component-starter
5.99. gstreamer-plugins-base
5.99. gstreamer-plugins-base 5.99.1. RHEA-2012:1473 - gstreamer-plugins-base enhancement update Updated gstreamer-plugins-base packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The gstreamer-plugins-base packages provide a collection of base plug-ins for the GStreamer streaming media framework. Enhancement BZ# 755777 This update adds color-matrix support for color conversions to the ffmpegcolorspace plugin. All users of gstreamer-plugins-base are advised to upgrade to these updated packages, which add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gstreamer-plugins-base
Chapter 1. Red Hat Decision Manager components
Chapter 1. Red Hat Decision Manager components The product is made up of Business Central and KIE Server. Business Central is the graphical user interface where you create and manage business rules. You can install Business Central in a Red Hat JBoss EAP instance or on the Red Hat OpenShift Container Platform (OpenShift). Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. You can install KIE Server in a Red Hat JBoss EAP instance, in a Red Hat JBoss EAP cluster, on OpenShift, in an Oracle WebLogic server instance, in an IBM WebSphere Application Server instance, or as part of a Spring Boot application. You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). A KIE container is a specific version of a project. If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/components-con_execution-server
Chapter 1. Architectures
Chapter 1. Architectures Red Hat Enterprise Linux 7.2 is available as a single kit on the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ and POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM System z [4] [1] Note that the Red Hat Enterprise Linux 7.2 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.2 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.2 (big endian) is currently supported as a KVM guest on Red Hat Enterprise Virtualization for Power and on PowerVM. [3] Red Hat Enterprise Linux 7.2 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Virtualization for Power, on PowerVM and PowerNV (bare metal). [4] Note that Red Hat Enterprise Linux 7.2 supports IBM zEnterprise 196 hardware or later; IBM System z10 mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.2.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/chap-red_hat_enterprise_linux-7.2_release_notes-architectures
Chapter 2. File System Structure and Maintenance
Chapter 2. File System Structure and Maintenance The file system structure is the most basic level of organization in an operating system. The way an operating system interacts with its users, applications, and security model nearly always depends on how the operating system organizes files on storage devices. Providing a common file system structure ensures users and programs can access and write files. File systems break files down into two logical categories: Shareable and unsharable files Shareable files can be accessed locally and by remote hosts. Unsharable files are only available locally. Variable and static files Variable files, such as documents, can be changed at any time. Static files, such as binaries, do not change without an action from the system administrator. Categorizing files in this manner helps correlate the function of each file with the permissions assigned to the directories which hold them. How the operating system and its users interact with a file determines the directory in which it is placed, whether that directory is mounted with read-only or read and write permissions, and the level of access each user has to that file. The top level of this organization is crucial; access to the underlying directories can be restricted, otherwise security problems could arise if, from the top level down, access rules do not adhere to a rigid structure. 2.1. Overview of Filesystem Hierarchy Standard (FHS) Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard ( FHS ) file system structure, which defines the names, locations, and permissions for many file types and directories. The FHS document is the authoritative reference to any FHS-compliant file system, but the standard leaves many areas undefined or extensible. This section is an overview of the standard and a description of the parts of the file system not covered by the standard. The two most important elements of FHS compliance are: Compatibility with other FHS-compliant systems The ability to mount a /usr/ partition as read-only. This is crucial, since /usr/ contains common executables and should not be changed by users. In addition, since /usr/ is mounted as read-only, it should be mountable from the CD-ROM drive or from another machine via a read-only NFS mount. 2.1.1. FHS Organization The directories and files noted here are a small subset of those specified by the FHS document. For the most complete information, see the latest FHS documentation at http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.pdf ; the file-hierarchy (7) man page also provides an overview. Note The directories that are available depend on what is installed on any given system. The following lists are only an example of what may be found. 2.1.1.1. Gathering File System Information df Command The df command reports the system's disk space usage. Its output looks similar to the following: Example 2.1. df Command Output By default, df shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h . The -h argument stands for "human-readable" format. The output for df -h looks similar to the following: Example 2.2. df -h Command Output Note In the given examples, the mounted partition /dev/shm represents the system's virtual memory file system. du Command The du command displays the estimated amount of space being used by files in a directory, displaying the disk usage of each subdirectory. 
The last line in the output of du shows the total disk usage of the directory. To see only the total disk usage of a directory in human-readable format, use du -hs . For more options, see man du . Gnome System Monitor To view the system's partitions and disk space usage in a graphical format, use the Gnome System Monitor by clicking on Applications System Tools System Monitor or using the command gnome-system-monitor . Select the File Systems tab to view the system's partitions. The following figure illustrates the File Systems tab. Figure 2.1. File Systems Tab in GNOME System Monitor 2.1.1.2. The /boot/ Directory The /boot/ directory contains static files required to boot the system, for example, the Linux kernel. These files are essential for the system to boot properly. Warning Do not remove the /boot/ directory. Doing so renders the system unbootable. 2.1.1.3. The /dev/ Directory The /dev/ directory contains device nodes that represent the following device types: devices attached to the system; virtual devices provided by the kernel. These device nodes are essential for the system to function properly. The udevd daemon creates and removes device nodes in /dev/ as needed. Devices in the /dev/ directory and subdirectories are defined as either character (providing only a serial stream of input and output, for example, mouse or keyboard) or block (accessible randomly, such as a hard drive or a floppy drive). If GNOME or KDE is installed, some storage devices are automatically detected when connected (such as with USB) or inserted (such as a CD or DVD drive), and a pop-up window displaying the contents appears. Table 2.1. Examples of Common Files in the /dev Directory File Description /dev/hda The master device on the primary IDE channel. /dev/hdb The slave device on the primary IDE channel. /dev/tty0 The first virtual console. /dev/tty1 The second virtual console. /dev/sda The first device on the primary SCSI or SATA channel. /dev/lp0 The first parallel port. A valid block device can be one of two types of entries: Mapped device A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02 . Static device A traditional storage volume, for example, /dev/ sdb X , where sdb is a storage device name and X is the partition number. /dev/sdb X can also be /dev/disk/by-id/ WWID , or /dev/disk/by-uuid/ UUID . For more information, see Section 25.8, "Persistent Naming" . 2.1.1.4. The /etc/ Directory The /etc/ directory is reserved for configuration files that are local to the machine. It should not contain any binaries; if there are any binaries, move them to /usr/bin/ or /usr/sbin/ . For example, the /etc/skel/ directory stores "skeleton" user files, which are used to populate a home directory when a user is first created. Applications also store their configuration files in this directory and may reference them when executed. The /etc/exports file controls which file systems export to remote hosts. 2.1.1.5. The /mnt/ Directory The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts. For all removable storage media, use the /media/ directory. Automatically detected removable media is mounted in the /media directory. Important The /mnt directory must not be used by installation programs. 2.1.1.6. The /opt/ Directory The /opt/ directory is normally reserved for software and add-on packages that are not part of the default installation. 
A package that installs to /opt/ creates a directory bearing its name, for example, /opt/ packagename / . In most cases, such packages follow a predictable subdirectory structure; most store their binaries in /opt/ packagename /bin/ and their man pages in /opt/ packagename /man/ . 2.1.1.7. The /proc/ Directory The /proc/ directory contains special files that either extract information from the kernel or send information to it. Examples of such information include system memory, CPU information, and hardware configuration. For more information about /proc/ , see Section 2.3, "The /proc Virtual File System" . 2.1.1.8. The /srv/ Directory The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory. 2.1.1.9. The /sys/ Directory The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel. With the increased support for hot plug hardware devices in the kernel, the /sys/ directory contains information similar to that held by /proc/ , but displays a hierarchical view of device information specific to hot plug devices. 2.1.1.10. The /usr/ Directory The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following subdirectories: /usr/bin This directory is used for binaries. /usr/etc This directory is used for system-wide configuration files. /usr/games This directory stores games. /usr/include This directory is used for C header files. /usr/kerberos This directory is used for Kerberos-related binaries and files. /usr/lib This directory is used for object files and libraries that are not designed to be directly utilized by shell scripts or users. As of Red Hat Enterprise Linux 7.0, the /lib/ directory has been merged with /usr/lib . Now it also contains libraries needed to execute the binaries in /usr/bin/ and /usr/sbin/ . These shared library images are used to boot the system or execute commands within the root file system. /usr/libexec This directory contains small helper programs called by other programs. /usr/sbin As of Red Hat Enterprise Linux 7.0, /sbin has been moved to /usr/sbin . This means that it contains all system administration binaries, including those essential for booting, restoring, recovering, or repairing the system. The binaries in /usr/sbin/ require root privileges to use. /usr/share This directory stores files that are not architecture-specific. /usr/src This directory stores source code. /usr/tmp linked to /var/tmp This directory stores temporary files. The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is used by the system administrator when installing software locally, and should be safe from being overwritten during system updates. The /usr/local directory has a structure similar to /usr/ , and contains the following subdirectories: /usr/local/bin /usr/local/etc /usr/local/games /usr/local/include /usr/local/lib /usr/local/libexec /usr/local/sbin /usr/local/share /usr/local/src Red Hat Enterprise Linux's usage of /usr/local/ differs slightly from the FHS. The FHS states that /usr/local/ should be used to store software that should remain safe from system software upgrades. 
Since the RPM Package Manager can perform software upgrades safely, it is not necessary to protect files by storing them in /usr/local/ . Instead, Red Hat Enterprise Linux uses /usr/local/ for software local to the machine. For instance, if the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install a package or program under the /usr/local/ directory. 2.1.1.11. The /var/ Directory Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for variable data, which includes spool directories and files, logging data, transient and temporary files. Following are some of the directories found within the /var/ directory: /var/account/ /var/arpwatch/ /var/cache/ /var/crash/ /var/db/ /var/empty/ /var/ftp/ /var/gdm/ /var/kerberos/ /var/lib/ /var/local/ /var/lock/ /var/log/ /var/mail linked to /var/spool/mail/ /var/mailman/ /var/named/ /var/nis/ /var/opt/ /var/preserve/ /var/run/ /var/spool/ /var/tmp/ /var/tux/ /var/www/ /var/yp/ Important The /var/run/media/ user directory contains subdirectories used as mount points for removable media such as USB storage media, DVDs, CD-ROMs, and Zip disks. Note that previously, the /media/ directory was used for this purpose. System log files, such as messages and lastlog , go in the /var/log/ directory. The /var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory, usually in directories for the program using the file. The /var/spool/ directory has subdirectories that store data files for some programs. These subdirectories include: /var/spool/at/ /var/spool/clientmqueue/ /var/spool/cron/ /var/spool/cups/ /var/spool/exim/ /var/spool/lpd/ /var/spool/mail/ /var/spool/mailman/ /var/spool/mqueue/ /var/spool/news/ /var/spool/postfix/ /var/spool/repackage/ /var/spool/rwho/ /var/spool/samba/ /var/spool/squid/ /var/spool/squirrelmail/ /var/spool/up2date/ /var/spool/uucp/ /var/spool/uucppublic/ /var/spool/vbox/
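To round out this overview of the directory tree, the following shell session confirms two points made in the preceding sections: on Red Hat Enterprise Linux 7 the top-level /bin, /sbin, /lib, and /lib64 directories are symbolic links into /usr/, and variable data such as logs and the RPM databases accumulates under /var/. The output is illustrative only; sizes and dates will differ on your system.
$ ls -ld /bin /sbin /lib /lib64
lrwxrwxrwx. 1 root root 7 Jan 15 10:02 /bin -> usr/bin
lrwxrwxrwx. 1 root root 8 Jan 15 10:02 /sbin -> usr/sbin
lrwxrwxrwx. 1 root root 7 Jan 15 10:02 /lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Jan 15 10:02 /lib64 -> usr/lib64
$ du -sh /var/log /var/lib/rpm /var/spool
209M    /var/log
92M     /var/lib/rpm
1.2M    /var/spool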
[ "Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 11675568 6272120 4810348 57% / /dev/sda1 100691 9281 86211 10% /boot none 322856 0 322856 0% /dev/shm", "Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 12G 6.0G 4.6G 57% / /dev/sda1 99M 9.1M 85M 10% /boot none 316M 0 316M 0% /dev/shm" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-filesystem
Chapter 63. Salesforce Update Sink
Chapter 63. Salesforce Update Sink Updates an object in Salesforce. The body received must contain a JSON key-value pair for each property to update and sObjectName and sObjectId must be provided as parameters. Example of key-value pair: { "Phone": "1234567890", "Name": "Antonia" } 63.1. Configuration Options The following table summarizes the configuration options available for the salesforce-update-sink Kamelet: Property Name Description Type Default Example clientId * Consumer Key The Salesforce application consumer key string clientSecret * Consumer Secret The Salesforce application consumer secret string password * Password The Salesforce user password string sObjectId * Object Id Id of the object. Only required if using key-value pair. string sObjectName * Object Name Type of the object. Only required if using key-value pair. string "Contact" userName * Username The Salesforce username string loginUrl Login URL The Salesforce instance login URL string "https://login.salesforce.com" Note Fields marked with an asterisk (*) are mandatory. 63.2. Dependencies At runtime, the salesforce-update-sink Kamelet relies upon the presence of the following dependencies: camel:salesforce camel:kamelet 63.3. Usage This section describes how you can use the salesforce-update-sink . 63.3.1. Knative Sink You can use the salesforce-update-sink Kamelet as a Knative sink by binding it to a Knative object. salesforce-update-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-update-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-update-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" sObjectId: "The Object Id" sObjectName: "Contact" userName: "The Username" 63.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 63.3.1.2. Procedure for using the cluster CLI Save the salesforce-update-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-update-sink-binding.yaml 63.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel salesforce-update-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.sObjectId=The Object Id" -p "sink.sObjectName=Contact" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 63.3.2. Kafka Sink You can use the salesforce-update-sink Kamelet as a Kafka sink by binding it to a Kafka topic. salesforce-update-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-update-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-update-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" sObjectId: "The Object Id" sObjectName: "Contact" userName: "The Username" 63.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. 
Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 63.3.2.2. Procedure for using the cluster CLI Save the salesforce-update-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-update-sink-binding.yaml 63.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-update-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.sObjectId=The Object Id" -p "sink.sObjectName=Contact" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 63.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/salesforce-update-sink.kamelet.yaml
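As a quick smoke test of the Kafka binding above, you can publish a record such as the key-value example from the start of this chapter to the my-topic topic and then confirm that the object identified by sObjectId was updated in Salesforce. The sketch below is one possible approach, not part of the product documentation: it assumes an AMQ Streams cluster named my-cluster in the current namespace, and the producer image tag and bootstrap address are placeholders that you must replace with the values used in your environment.
$ oc run kafka-producer -it --rm --restart=Never \
    --image=registry.redhat.io/amq-streams/kafka-35-rhel8:latest \
    -- bin/kafka-console-producer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic
>{"Phone": "1234567890"}
Each line that you enter becomes one message, and therefore one update of the referenced Salesforce object.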
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-update-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-update-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" sObjectId: \"The Object Id\" sObjectName: \"Contact\" userName: \"The Username\"", "apply -f salesforce-update-sink-binding.yaml", "kamel bind channel:mychannel salesforce-update-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.sObjectId=The Object Id\" -p \"sink.sObjectName=Contact\" -p \"sink.userName=The Username\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-update-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-update-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" sObjectId: \"The Object Id\" sObjectName: \"Contact\" userName: \"The Username\"", "apply -f salesforce-update-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-update-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.sObjectId=The Object Id\" -p \"sink.sObjectName=Contact\" -p \"sink.userName=The Username\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/salesforce-sink-update
Chapter 1. Red Hat Quay overview
Chapter 1. Red Hat Quay overview Red Hat Quay is a distributed and highly available container image registry for your enterprise. Red Hat Quay container registry platform provides secure storage, distribution, access controls, geo-replications, repository mirroring, and governance of containers and cloud-native artifacts on any infrastructure. It is available as a standalone component or as an Operator for OpenShift Container Platform, and is deployable on-prem or on a public cloud. This guide provides an insight into architectural patterns to use when deploying Red Hat Quay. This guide also offers sizing guidance and deployment prerequisites, along with best practices for ensuring high availability for your Red Hat Quay registry. 1.1. Scalability and high availability (HA) The code base used for Red Hat Quay is the same as the code base used for Quay.io , which is the highly available container image registry hosted by Red Hat. Quay.io and Red Hat Quay offer a multitenant SaaS solution. As a result, users can be confident that their deployment can deliver at scale with high availability, whether their deployment is on-prem or on a public cloud. 1.2. Content distribution Content distribution features in Red Hat Quay include the following: Repository mirroring Red Hat Quay repository mirroring lets you mirror images from Red Hat Quay and other container registries, like JFrog Artifactory, Harbor, or Sonatype Nexus Repository, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags. Geo-replication Red Hat Quay geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirection for clients. Deployment in disconnected or air-gapped environments Red Hat Quay is deployable in a disconnected environment in one of two ways: Red Hat Quay and Clair connected to the internet, with an air-gapped OpenShift Container Platform cluster accessing the Red Hat Quay registry through an explicit, allowlisted hole in the firewall. Using two independent Red Hat Quay and Clair installations. One installation is connected to the internet and another within a disconnected, or firewalled, environment. Image and vulnerability data is manually transferred from the connected environment to the disconnected environment using offline media. 1.3. Build automation Red Hat Quay supports building Dockerfiles using a set of worker nodes on OpenShift Container Platform or Kubernetes platforms. Build triggers, such as GitHub webhooks, can be configured to automatically build new versions of your repositories when new code is committed. Prior to Red Hat Quay 3.7, Red Hat Quay ran Podman commands in virtual machines launched by pods. Running builds on virtual platforms requires enabling nested virtualization, which is not featured in Red Hat Enterprise Linux (RHEL) or OpenShift Container Platform. As a result, builds had to run on bare metal clusters, which is an inefficient use of resources. With Red Hat Quay 3.7, this requirement was removed and builds could be run on OpenShift Container Platform clusters running on virtualized or bare metal platforms. 1.4. 
Red Hat Quay enhanced build architecture The following image shows the expected design flow and architecture of the enhanced build features: With this enhancement, the build manager first creates the Job Object . The Job Object then creates a pod using the quay-builder-image . The quay-builder-image will contain the quay-builder binary and the Podman service. The created pod runs as unprivileged . The quay-builder binary then builds the image while communicating status and retrieving build information from the Build Manager. 1.5. Integration Red Hat Quay can integrate with almost all Git-compatible systems. Red Hat Quay offers automatic configuration for GitHub, GitLab, or BitBucket, which allows users to continuously build and serve their containerized software. 1.5.1. REST API Red Hat Quay provides a full OAuth 2, RESTful API. RESTful API offers the following benefits: Availability from endpoints of each Red Hat Quay instance from the URL, for example, https://quay-server.example.com/api/v1 Allow users to connect to endpoints through a browser, to GET , DELETE , POST , and PUT Red Hat Quay settings provided by a discovery endpoint that is usable by Swagger. The API can be invoked by the URL, for example, https://quay-server.example.com/api/v1 , and uses JSON objects as payload. 1.6. Security Red Hat Quay is built for real enterprise use cases where content governance and security are two major focus areas. Red Hat Quay content governance and security include built-in vulnerability scanning through Clair. 1.6.1. TLS/SSL configuration You can configure SSL/TLS for the Red Hat Quay registry in the configuration tool UI or in the configuration bundle. SSL/TLS connections to the database, to image storage, and to Redis can also be specified through the configuration tool. Sensitive fields in the database and at run time are automatically encrypted. You can also require HTTPS and verify certificates for the Red Hat Quay registry during mirror operations. 1.6.2. Clair Clair is an open source application that leverages static code analyses for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 1.6.3. Red Hat Quay Operator security When Red Hat Quay is deployed using the Red Hat Quay Operator, the tls component is set to managed by default and the OpenShift Container Platform's Certificate Authority is used to create HTTPS endpoints and to rotate TLS certificates. If you set the tls component to unmanaged , you can provide custom certificates to the pass-through Routes, however you are responsible for certificate rotation. 1.6.4. Fully isolated builds Red Hat Quay now supports building Dockerfiles that use both bare metal and virtual builders. By using bare-metal worker nodes, each build is done in an ephemeral virtual machine to ensure isolation and security while the build is running. This provides the best protection against rogue payloads. Running builds directly in a container does not have the same isolation as when using virtual machines, but it still provides good protection. 1.6.5. Role-based access controls Red Hat Quay provides full isolation of registry content by organization and team with fine-grained entitlements for read, write, and administrative access by users and automated tools. 1.7.
Recently added features See the Red Hat Quay Release Notes for information about the latest features, enhancements, deprecations, and known issues.
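As an illustration of the REST API described in Section 1.5.1, the following calls retrieve the Swagger-compatible API definition from the discovery endpoint and list the repositories of an organization. They reuse the example registry URL shown above; the bearer token and organization name are placeholders that you obtain by creating an OAuth application in your own deployment, and the exact set of endpoints available depends on your Red Hat Quay version.
$ curl -s https://quay-server.example.com/api/v1/discovery
$ curl -s -H "Authorization: Bearer <oauth2_access_token>" \
    "https://quay-server.example.com/api/v1/repository?namespace=<organization>"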
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_architecture/arch-intro
5.2.17. /proc/locks
5.2.17. /proc/locks This file displays the files currently locked by the kernel. The contents of this file contain internal kernel debugging data and can vary tremendously, depending on the use of the system. A sample /proc/locks file for a lightly loaded system looks similar to the following: Each lock has its own line, which starts with a unique number. The second column refers to the class of lock used, with FLOCK signifying the older-style UNIX file locks from a flock system call and POSIX representing the newer POSIX locks from the lockf system call. The third column can have two values: ADVISORY or MANDATORY . ADVISORY means that the lock does not prevent other processes from accessing the data; it only prevents other attempts to lock it. MANDATORY means that no other access to the data is permitted while the lock is held. The fourth column reveals whether the lock is allowing the holder READ or WRITE access to the file. The fifth column shows the ID of the process holding the lock. The sixth column shows the ID of the file being locked, in the format of MAJOR-DEVICE : MINOR-DEVICE : INODE-NUMBER . The seventh and eighth columns show the start and end of the file's locked region.
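For example, to see which process holds a given lock, take the process ID from the fifth column of the sample output in this section and look it up with ps. The process name returned depends entirely on what is running on your system; the one shown here is illustrative only.
$ grep "^1:" /proc/locks
1: POSIX ADVISORY WRITE 3568 fd:00:2531452 0 EOF
$ ps -o pid,comm -p 3568
  PID COMMAND
 3568 rpc.statd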
[ "1: POSIX ADVISORY WRITE 3568 fd:00:2531452 0 EOF 2: FLOCK ADVISORY WRITE 3517 fd:00:2531448 0 EOF 3: POSIX ADVISORY WRITE 3452 fd:00:2531442 0 EOF 4: POSIX ADVISORY WRITE 3443 fd:00:2531440 0 EOF 5: POSIX ADVISORY WRITE 3326 fd:00:2531430 0 EOF 6: POSIX ADVISORY WRITE 3175 fd:00:2531425 0 EOF 7: POSIX ADVISORY WRITE 3056 fd:00:2548663 0 EOF" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-locks
Chapter 1. Validating an installation
Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: $ cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: $ oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example.
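When a node has been running for a while, the CRI-O journal can be long, so it can help to filter directly for the relevant entries instead of paging through the full output. This is only a convenience on top of the command shown above; <node_name> is the same placeholder used in the procedure.
$ oc adm node-logs <node_name> -u crio | grep "Trying to access"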
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: $ oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: $ oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: $ oc describe clusterversion List the current update channel: $ oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: $ oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster using the web console for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: $ oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: $ oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster.
Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: $ oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: $ oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: $ oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 1.5. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.27.3 control-plane-1.example.com Ready master 41m v1.27.3 control-plane-2.example.com Ready master 45m v1.27.3 compute-2.example.com Ready worker 38m v1.27.3 compute-3.example.com Ready worker 33m v1.27.3 control-plane-3.example.com Ready master 41m v1.27.3 Review CPU and memory resource availability for each cluster node: $ oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.6. Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role.
Procedure In the Administrator perspective, navigate to Home > Overview . 1.7. Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Clusters list in OpenShift Cluster Manager and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.8. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe > Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. 1.9. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster.
You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to the Observe > Alerting > Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts as an Administrator for further details about alerting in OpenShift Container Platform. 1.10. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster .
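The checks in this chapter can be strung together into a quick post-installation spot check from a terminal. The snippet below is only a convenience wrapper around commands already shown in this chapter: it prints the cluster version, filters out cluster Operators that report the healthy Available=True, Progressing=False, Degraded=False combination so that only the header and Operators needing attention remain, and lists any nodes that do not report a plain Ready status. The grep patterns are deliberately rough; adjust them to your needs.
$ oc get clusterversion
$ oc get clusteroperators | grep -v "True.*False.*False"
$ oc get nodes | grep -v " Ready "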
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.27.3 control-plane-1.example.com Ready master 41m v1.27.3 
control-plane-2.example.com Ready master 45m v1.27.3 compute-2.example.com Ready worker 38m v1.27.3 compute-3.example.com Ready worker 33m v1.27.3 control-plane-3.example.com Ready master 41m v1.27.3", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/validation_and_troubleshooting/validating-an-installation
Chapter 19. API reference
Chapter 19. API reference 19.1. 5.6 Logging API reference 19.1.1. Logging 5.6 API reference 19.1.1.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 19.1.1.1.1. .spec 19.1.1.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 19.1.1.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 19.1.1.1.2. .spec.inputs[] 19.1.1.1.2.1. Description InputSpec defines a selector of log messages. 19.1.1.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 19.1.1.1.3. .spec.inputs[].application 19.1.1.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 19.1.1.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 19.1.1.1.4. .spec.inputs[].application.namespaces[] 19.1.1.1.4.1. Description 19.1.1.1.4.1.1. Type array 19.1.1.1.5. .spec.inputs[].application.selector 19.1.1.1.5.1. Description A label selector is a label query over a set of resources. 19.1.1.1.5.1.1. Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 19.1.1.1.6. .spec.inputs[].application.selector.matchLabels 19.1.1.1.6.1. Description 19.1.1.1.6.1.1. Type object 19.1.1.1.7. .spec.outputDefaults 19.1.1.1.7.1. Description 19.1.1.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 19.1.1.1.8. .spec.outputDefaults.elasticsearch 19.1.1.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 19.1.1.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 19.1.1.1.9. .spec.outputs[] 19.1.1.1.9.1. Description Output defines a destination for log messages. 19.1.1.1.9.1.1. 
Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 19.1.1.1.10. .spec.outputs[].secret 19.1.1.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 19.1.1.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 19.1.1.1.11. .spec.outputs[].tls 19.1.1.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 19.1.1.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 19.1.1.1.12. .spec.pipelines[] 19.1.1.1.12.1. Description PipelinesSpec link a set of inputs to a set of outputs. 19.1.1.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 19.1.1.1.13. .spec.pipelines[].inputRefs[] 19.1.1.1.13.1. Description 19.1.1.1.13.1.1. Type array 19.1.1.1.14. .spec.pipelines[].labels 19.1.1.1.14.1. Description 19.1.1.1.14.1.1. Type object 19.1.1.1.15. .spec.pipelines[].outputRefs[] 19.1.1.1.15.1. Description 19.1.1.1.15.1.1. Type array 19.1.1.1.16. .status 19.1.1.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 19.1.1.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 19.1.1.1.17. .status.conditions 19.1.1.1.17.1. Description 19.1.1.1.17.1.1. Type object 19.1.1.1.18. .status.inputs 19.1.1.1.18.1. Description 19.1.1.1.18.1.1. Type Conditions 19.1.1.1.19. .status.outputs 19.1.1.1.19.1. Description 19.1.1.1.19.1.1. Type Conditions 19.1.1.1.20. .status.pipelines 19.1.1.1.20.1. Description 19.1.1.1.20.1.1. Type Conditions== ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 19.1.1.1.21. .spec 19.1.1.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 19.1.1.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated. 
Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 19.1.1.1.22. .spec.collection 19.1.1.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 19.1.1.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 19.1.1.1.23. .spec.collection.fluentd 19.1.1.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 19.1.1.1.23.1.1. Type object Property Type Description buffer object inFile object 19.1.1.1.24. .spec.collection.fluentd.buffer 19.1.1.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 19.1.1.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 19.1.1.1.25. .spec.collection.fluentd.inFile 19.1.1.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 19.1.1.1.25.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 19.1.1.1.26. .spec.collection.logs 19.1.1.1.26.1. Description 19.1.1.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 19.1.1.1.27. .spec.collection.logs.fluentd 19.1.1.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 19.1.1.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 19.1.1.1.28. .spec.collection.logs.fluentd.nodeSelector 19.1.1.1.28.1. Description 19.1.1.1.28.1.1. Type object 19.1.1.1.29. .spec.collection.logs.fluentd.resources 19.1.1.1.29.1. Description 19.1.1.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.30. .spec.collection.logs.fluentd.resources.limits 19.1.1.1.30.1. Description 19.1.1.1.30.1.1. Type object 19.1.1.1.31. .spec.collection.logs.fluentd.resources.requests 19.1.1.1.31.1. Description 19.1.1.1.31.1.1. Type object 19.1.1.1.32. .spec.collection.logs.fluentd.tolerations[] 19.1.1.1.32.1. Description 19.1.1.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 19.1.1.1.33.1. Description 19.1.1.1.33.1.1. Type int 19.1.1.1.34. .spec.curation 19.1.1.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 19.1.1.1.34.1.1. Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 19.1.1.1.35. .spec.curation.curator 19.1.1.1.35.1. Description 19.1.1.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 19.1.1.1.36. .spec.curation.curator.nodeSelector 19.1.1.1.36.1. Description 19.1.1.1.36.1.1. Type object 19.1.1.1.37. .spec.curation.curator.resources 19.1.1.1.37.1. Description 19.1.1.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.38. .spec.curation.curator.resources.limits 19.1.1.1.38.1. Description 19.1.1.1.38.1.1. Type object 19.1.1.1.39. .spec.curation.curator.resources.requests 19.1.1.1.39.1. Description 19.1.1.1.39.1.1. Type object 19.1.1.1.40. 
.spec.curation.curator.tolerations[] 19.1.1.1.40.1. Description 19.1.1.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 19.1.1.1.41.1. Description 19.1.1.1.41.1.1. Type int 19.1.1.1.42. .spec.forwarder 19.1.1.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use, it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 19.1.1.1.42.1.1. Type object Property Type Description fluentd object 19.1.1.1.43. .spec.forwarder.fluentd 19.1.1.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 19.1.1.1.43.1.1. Type object Property Type Description buffer object inFile object 19.1.1.1.44. .spec.forwarder.fluentd.buffer 19.1.1.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 19.1.1.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 19.1.1.1.45. .spec.forwarder.fluentd.inFile 19.1.1.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 19.1.1.1.45.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 19.1.1.1.46. .spec.logStore 19.1.1.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 19.1.1.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 19.1.1.1.47. .spec.logStore.elasticsearch 19.1.1.1.47.1. Description 19.1.1.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 19.1.1.1.48. .spec.logStore.elasticsearch.nodeSelector 19.1.1.1.48.1. Description 19.1.1.1.48.1.1. Type object 19.1.1.1.49. .spec.logStore.elasticsearch.proxy 19.1.1.1.49.1. Description 19.1.1.1.49.1.1. Type object Property Type Description resources object 19.1.1.1.50. .spec.logStore.elasticsearch.proxy.resources 19.1.1.1.50.1. Description 19.1.1.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 19.1.1.1.51.1. Description 19.1.1.1.51.1.1. Type object 19.1.1.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 19.1.1.1.52.1. Description 19.1.1.1.52.1.1. Type object 19.1.1.1.53. .spec.logStore.elasticsearch.resources 19.1.1.1.53.1. Description 19.1.1.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.54. .spec.logStore.elasticsearch.resources.limits 19.1.1.1.54.1. Description 19.1.1.1.54.1.1. Type object 19.1.1.1.55. .spec.logStore.elasticsearch.resources.requests 19.1.1.1.55.1. Description 19.1.1.1.55.1.1. Type object 19.1.1.1.56. .spec.logStore.elasticsearch.storage 19.1.1.1.56.1. Description 19.1.1.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 19.1.1.1.57. .spec.logStore.elasticsearch.storage.size 19.1.1.1.57.1. Description 19.1.1.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 19.1.1.1.58. .spec.logStore.elasticsearch.storage.size.d 19.1.1.1.58.1. Description 19.1.1.1.58.1.1. Type object Property Type Description Dec object 19.1.1.1.59. 
.spec.logStore.elasticsearch.storage.size.d.Dec 19.1.1.1.59.1. Description 19.1.1.1.59.1.1. Type object Property Type Description scale int unscaled object 19.1.1.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 19.1.1.1.60.1. Description 19.1.1.1.60.1.1. Type object Property Type Description abs Word sign neg bool 19.1.1.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 19.1.1.1.61.1. Description 19.1.1.1.61.1.1. Type Word 19.1.1.1.62. .spec.logStore.elasticsearch.storage.size.i 19.1.1.1.62.1. Description 19.1.1.1.62.1.1. Type int Property Type Description scale int value int 19.1.1.1.63. .spec.logStore.elasticsearch.tolerations[] 19.1.1.1.63.1. Description 19.1.1.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 19.1.1.1.64.1. Description 19.1.1.1.64.1.1. Type int 19.1.1.1.65. .spec.logStore.lokistack 19.1.1.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 19.1.1.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 19.1.1.1.66. .spec.logStore.retentionPolicy 19.1.1.1.66.1. Description 19.1.1.1.66.1.1. Type object Property Type Description application object audit object infra object 19.1.1.1.67. .spec.logStore.retentionPolicy.application 19.1.1.1.67.1. Description 19.1.1.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 19.1.1.1.68.1. Description 19.1.1.1.68.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.69. .spec.logStore.retentionPolicy.audit 19.1.1.1.69.1. Description 19.1.1.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 19.1.1.1.70.1. Description 19.1.1.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.71. 
.spec.logStore.retentionPolicy.infra 19.1.1.1.71.1. Description 19.1.1.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 19.1.1.1.72.1. Description 19.1.1.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.73. .spec.visualization 19.1.1.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 19.1.1.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 19.1.1.1.74. .spec.visualization.kibana 19.1.1.1.74.1. Description 19.1.1.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 19.1.1.1.75. .spec.visualization.kibana.nodeSelector 19.1.1.1.75.1. Description 19.1.1.1.75.1.1. Type object 19.1.1.1.76. .spec.visualization.kibana.proxy 19.1.1.1.76.1. Description 19.1.1.1.76.1.1. Type object Property Type Description resources object 19.1.1.1.77. .spec.visualization.kibana.proxy.resources 19.1.1.1.77.1. Description 19.1.1.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.78. .spec.visualization.kibana.proxy.resources.limits 19.1.1.1.78.1. Description 19.1.1.1.78.1.1. Type object 19.1.1.1.79. .spec.visualization.kibana.proxy.resources.requests 19.1.1.1.79.1. Description 19.1.1.1.79.1.1. Type object 19.1.1.1.80. .spec.visualization.kibana.replicas 19.1.1.1.80.1. Description 19.1.1.1.80.1.1. Type int 19.1.1.1.81. .spec.visualization.kibana.resources 19.1.1.1.81.1. Description 19.1.1.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.82. .spec.visualization.kibana.resources.limits 19.1.1.1.82.1. Description 19.1.1.1.82.1.1. Type object 19.1.1.1.83. .spec.visualization.kibana.resources.requests 19.1.1.1.83.1. Description 19.1.1.1.83.1.1. Type object 19.1.1.1.84. .spec.visualization.kibana.tolerations[] 19.1.1.1.84.1. Description 19.1.1.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. 
tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 19.1.1.1.85.1. Description 19.1.1.1.85.1.1. Type int 19.1.1.1.86. .status 19.1.1.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 19.1.1.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 19.1.1.1.87. .status.collection 19.1.1.1.87.1. Description 19.1.1.1.87.1.1. Type object Property Type Description logs object (optional) 19.1.1.1.88. .status.collection.logs 19.1.1.1.88.1. Description 19.1.1.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 19.1.1.1.89. .status.collection.logs.fluentdStatus 19.1.1.1.89.1. Description 19.1.1.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 19.1.1.1.90. .status.collection.logs.fluentdStatus.clusterCondition 19.1.1.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 19.1.1.1.90.1.1. Type object 19.1.1.1.91. .status.collection.logs.fluentdStatus.nodes 19.1.1.1.91.1. Description 19.1.1.1.91.1.1. Type object 19.1.1.1.92. .status.conditions 19.1.1.1.92.1. Description 19.1.1.1.92.1.1. Type object 19.1.1.1.93. .status.curation 19.1.1.1.93.1. Description 19.1.1.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 19.1.1.1.94. .status.curation.curatorStatus[] 19.1.1.1.94.1. Description 19.1.1.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 19.1.1.1.95. .status.curation.curatorStatus[].clusterCondition 19.1.1.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 19.1.1.1.95.1.1. Type object 19.1.1.1.96. .status.logStore 19.1.1.1.96.1. Description 19.1.1.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 19.1.1.1.97. .status.logStore.elasticsearchStatus[] 19.1.1.1.97.1. Description 19.1.1.1.97.1.1. Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 19.1.1.1.98. .status.logStore.elasticsearchStatus[].cluster 19.1.1.1.98.1. Description 19.1.1.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 19.1.1.1.99. 
.status.logStore.elasticsearchStatus[].clusterConditions 19.1.1.1.99.1. Description 19.1.1.1.99.1.1. Type object 19.1.1.1.100. .status.logStore.elasticsearchStatus[].deployments[] 19.1.1.1.100.1. Description 19.1.1.1.100.1.1. Type array 19.1.1.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 19.1.1.1.101.1. Description 19.1.1.1.101.1.1. Type object 19.1.1.1.102. .status.logStore.elasticsearchStatus[].pods 19.1.1.1.102.1. Description 19.1.1.1.102.1.1. Type object 19.1.1.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 19.1.1.1.103.1. Description 19.1.1.1.103.1.1. Type array 19.1.1.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 19.1.1.1.104.1. Description 19.1.1.1.104.1.1. Type array 19.1.1.1.105. .status.visualization 19.1.1.1.105.1. Description 19.1.1.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 19.1.1.1.106. .status.visualization.kibanaStatus[] 19.1.1.1.106.1. Description 19.1.1.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 19.1.1.1.107. .status.visualization.kibanaStatus[].clusterCondition 19.1.1.1.107.1. Description 19.1.1.1.107.1.1. Type object 19.1.1.1.108. .status.visualization.kibanaStatus[].replicaSets[] 19.1.1.1.108.1. Description 19.1.1.1.108.1.1. Type array
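The flattened reference above can be hard to map back to an actual resource. As an illustration only, the following is a minimal sketch of a ClusterLogging custom resource that sets the fluentd forwarder tuning fields described in sections 19.1.1.1.42 through 19.1.1.1.45. The resource name instance, the openshift-logging namespace, and every numeric value shown here are assumptions for the sketch, not recommended settings; only the field names are taken from the reference.

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                # assumed name; adjust to your deployment
  namespace: openshift-logging  # assumed namespace
spec:
  forwarder:
    fluentd:
      buffer:                   # FluentdBufferSpec, see 19.1.1.1.44
        chunkLimitSize: 8m
        totalLimitSize: 8G
        flushMode: interval
        flushInterval: 5s
        flushThreadCount: 2
        overflowAction: block
        retryType: exponential_backoff
        retryWait: 1s
        retryMaxInterval: 300s
        retryTimeout: 60m
      inFile:                   # FluentdInFileSpec, see 19.1.1.1.45
        readLinesLimit: 500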
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/api-reference
Sandboxed Containers Support for OpenShift
Sandboxed Containers Support for OpenShift OpenShift Container Platform 4.11 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/sandboxed_containers_support_for_openshift/index
Chapter 2. Onboarding certification partners
Chapter 2. Onboarding certification partners Use the Red Hat Partner Connect Portal to create a new account if you are a new partner, or use your existing Red Hat account if you are a current partner to onboard with Red Hat for certifying your products. 2.1. Onboarding existing certification partners As an existing partner you could be: A member of the one-to-many Ecosystem Partner Management (EPM) program who has some degree of representation on the EPM team, but does not have any assistance with the Red Hat Certified Cloud and Service Provider (CCSP)certification process. OR A member fully managed by the EPM team in the traditional manner with a dedicated EPM team member who is assigned to manage the partner, including questions about the CCSP certification requests. Note If you think your company has an existing Red Hat account but are not sure who is the Organization Administrator for your company, email [email protected] to add you to your company's existing account. Prerequisites You have an existing Red Hat account. Procedure Access Red Hat Customer Portal and click Log in . If you do not have an active membership for the Certified Cloud and Service Provider (CCSP) Program, visit Red Hat CCSP program to learn more about it and to join the program. Contact the connect team for more information. If you already have an active membership for the Certified Cloud and Service Provider (CCSP) Program, from the main menu, click Log in . Enter your Red Hat login or email address and click . Then, use either of the following options: Log in with company single sign-on Log in with Red Hat account After you have an active membership in the CCSP program and an SSO account, the account must be entitled with certification privileges. To do this, open a case and include the following information in the Problem Statement field: Problem Statement: Partner Certification: CCSP Certification Access for {Red Hat SSO Username} at {Partner Name} OPTIONAL Include all of the following information in the What do you expect to see field to have a Red Hat associate address your case and create the first certification request for you: Name of the Cloud or the Cloud Service Offering Public Catalog URL/Public URL of the Cloud or Cloud Service Offering Supported Regions (Global/Australia & New Zealand/ASEAN/EMEA/Japan/LATAM/North America/Public Sector): Supported Languages Any 3rd Party Certifications Acquired (E.g. FedRAMP, Systrust, SAS 70, PCI, Other Non-NA Certs, etc.): RHEL Version (8.x or 9.x) of the first certification desired to be achieved Attach File: Attached Partner Brand Logo (PNG 256x256) After reviewing your request, a member from the CCSP certification team will contact you and help you to proceed with the certification process. 2.2. Onboarding new certification partners Creating a new Red Hat account is the first step in onboarding new certification partners. Access Red Hat Customer Portal and click Register . Enter the following details to create a new Red Hat account: Select Corporate in the Account Type field. If you have created a Corporate type account and require an account number, contact the Red Hat global customer service team . Note Ensure that you create a company account and not a personal account. The account created during this step is also used to sign in to the Red Hat Ecosystem Catalog when working with certification requests. Choose a Red Hat login and password. 
Important If your login ID is associated with multiple accounts, then do not use your contact email as the login ID as this can cause issues during login. Also, you cannot change your login ID once created. Enter your Personal information and Company information . Click Create My Account . A new Red Hat account is created. After you have an active membership in the CCSP program and an SSO account, the account must be entitled with certification privileges. To do this, open a case and include the following information in the Problem Statement field: Problem Statement: Partner Certification: CCSP Certification Access for {Red Hat SSO Username} at {Partner Name} OPTIONAL Include all of the following information in the What do you expect to see field to have a Red Hat associate address your case and create the first certification request for you: Name of the Cloud or the Cloud Service Offering Public Catalog URL/Public URL of the Cloud or Cloud Service Offering Supported Regions (Global/Australia & New Zealand/ASEAN/EMEA/Japan/LATAM/North America/Public Sector): Supported Languages Any 3rd Party Certifications Acquired (E.g. FedRAMP, Systrust, SAS 70, PCI, Other Non-NA Certs, etc.): RHEL Version (8.x or 9.x) of the first certification desired to be achieved Attach File: Attached Partner Brand Logo (PNG 256x256) After reviewing your request, a member from the CCSP certification team will contact you and help you to proceed with the certification process. 2.3. Cloud certification types Cloud certifications are classified into two main categories: Cloud Instance Type Certification Cloud Image Certification Note Certify your product by following the Red Hat Hardware certification process before obtaining Cloud Instance Type certification. For more information about Red Hat Hardware Certification, see the Red Hat Hardware Certification Test Suite User Guide. 2.3.1. Cloud Instance Type certification Cloud Instance Type Certification is required for CCSP partners offering multiple, distinct types of hardware to run Red Hat products. This certification type is an acknowledgement of Red Hat product validation and provides visibility for end customers. This visibility is an indication of support for various cloud provider hardware and/or virtual machine types. It provides end customers with the knowledge that they will have a fully supported and exceptional experience when running their Red Hat workloads on the cloud hardware and/or virtual machine. 2.4. Cloud certification requirements Review and comply with the following cloud certification requirements before proceeding with cloud certifications. Note Certify your product by following the Red Hat Hardware certification process before obtaining Cloud Instance Type certification. For more information about Red Hat Hardware Certification, see the Red Hat Hardware Certification Test Suite User Guide . 2.4.1. Certification requirements for Cloud Instance Type To gain a Cloud Instance Type certification, you are expected to certify the Cloud Instances in your catalog. The certification process assures your Red Hat customers that they will have a consistent experience across cloud instance providers, that their experience comes with the highest level of support, and that good security practices are available to them. The instance needs to meet the complete list of requirements and policies that are outlined in the Cloud Instance Type Certification Policy Guide . The test plan is created based on the certification policy.
Additional resources To know about the benefits of being a Red Hat partner, see Red Hat partner benefits . For more information about certification specific policies and requirements, see Red Hat Cloud Instance Type Policy Guide .
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_workflow/assembly_onboarding-certification-partners_cloud-instance-wf-introduction
Chapter 11. Configuring seccomp profiles
Chapter 11. Configuring seccomp profiles An OpenShift Container Platform container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Secure computing mode, seccomp, is a Linux kernel feature that can be used to limit the process running in a container to only using a subset of the available system calls. The restricted-v2 SCC applies to all newly created pods in 4.11. The default seccomp profile runtime/default is applied to these pods. Seccomp profiles are stored as JSON files on the disk. Important Seccomp profiles cannot be applied to privileged containers. 11.1. Verifying the default seccomp profile applied to a pod OpenShift Container Platform ships with a default seccomp profile that is referenced as runtime/default . In 4.11, newly created pods have the Security Context Constraint (SCC) set to restricted-v2 and the default seccomp profile applies to the pod. Procedure You can verify the Security Context Constraint (SCC) and the default seccomp profile set on a pod by running the following commands: Verify what pods are running in the namespace: $ oc get pods -n <namespace> For example, to verify what pods are running in the workshop namespace, run the following: $ oc get pods -n workshop Example output NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s Inspect the pods: $ oc get pod parksmap-1-4xkwf -n workshop -o yaml Example output apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2 1 The restricted-v2 SCC is added by default if your workload does not have access to a different SCC. 2 Newly created pods in 4.11 will have the seccomp profile configured to runtime/default as mandated by the SCC. 11.1.1. Upgraded cluster In clusters upgraded to 4.11, all authenticated users have access to the restricted and restricted-v2 SCCs. For example, a workload admitted by the restricted SCC on an OpenShift Container Platform 4.10 cluster might get admitted by restricted-v2 after the upgrade. This is because restricted-v2 is the more restrictive SCC between restricted and restricted-v2 . Note The workload must be able to run with restricted-v2 . Conversely, a workload that requires privilegeEscalation: true continues to have the restricted SCC available for any authenticated user. This is because restricted-v2 does not allow privilegeEscalation . 11.1.2. Newly installed cluster For a newly installed OpenShift Container Platform 4.11 cluster, the restricted-v2 SCC replaces the restricted SCC as an SCC that is available to be used by any authenticated user. A workload with privilegeEscalation: true is not admitted into the cluster because restricted-v2 is the only SCC available for authenticated users by default. The feature privilegeEscalation is allowed by restricted but not by restricted-v2 .
More features are denied by restricted-v2 than were allowed by the restricted SCC. A workload with privilegeEscalation: true is not admitted into a newly installed OpenShift Container Platform 4.11 cluster. To give the ServiceAccount running the workload access to the restricted SCC (or any other SCC that can admit this workload) using a RoleBinding, run the following command: $ oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name> In OpenShift Container Platform 4.11, the ability to add the pod annotations seccomp.security.alpha.kubernetes.io/pod: runtime/default and container.seccomp.security.alpha.kubernetes.io/<container_name>: runtime/default is deprecated. 11.2. Configuring a custom seccomp profile You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in OpenShift Container Platform. Seccomp security profiles list the system calls (syscalls) a process can make. The permissions are broader than those of SELinux, which restricts operations, such as write , system-wide. 11.2.1. Creating seccomp profiles You can use the MachineConfig object to create profiles. Seccomp can restrict system calls (syscalls) within a container, limiting the access of your application. Prerequisites You have cluster admin permissions. You have created a custom security context constraint (SCC). For more information, see Additional resources . Procedure Create the MachineConfig object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json An illustrative profile file and the base64 encoding step that produces the <hash> value are sketched at the end of this chapter. 11.2.2. Setting up the custom seccomp profile Prerequisites You have cluster administrator permissions. You have created a custom security context constraint (SCC). For more information, see "Additional resources". You have created a custom seccomp profile. Procedure Upload your custom seccomp profile to /var/lib/kubelet/seccomp/<custom-name>.json by using the Machine Config. See "Additional resources" for detailed steps. Update the custom SCC by providing a reference to the created custom seccomp profile: seccompProfiles: - localhost/<custom-name>.json 1 1 Provide the name of your custom seccomp profile. 11.2.3. Applying the custom seccomp profile to the workload Prerequisite The cluster administrator has set up the custom seccomp profile. For more details, see "Setting up the custom seccomp profile". Procedure Apply the seccomp profile to the workload by setting the securityContext.seccompProfile.type field as follows: Example spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1 1 Provide the name of your custom seccomp profile. Alternatively, you can use the pod annotation seccomp.security.alpha.kubernetes.io/pod: localhost/<custom-name>.json . However, this method is deprecated in OpenShift Container Platform 4.11. During deployment, the admission controller validates the following: The annotations against the current SCCs allowed by the user role. The SCC, which includes the seccomp profile, is allowed for the pod. If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile.
Important Ensure that the seccomp profile is deployed to all worker nodes. Note The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. 11.3. Additional resources Managing security context constraints Post-installation machine configuration tasks
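For reference, a minimal sketch of what the profile file itself, such as the seccomp-nostat.json file named in the MachineConfig example, might contain is shown below. It uses the standard JSON seccomp profile format that the kubelet consumes; the blocked syscall names are illustrative examples only, not a vetted policy.

{
  "defaultAction": "SCMP_ACT_ALLOW",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["stat", "statx", "newfstatat"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}

To embed a file like this in the MachineConfig object, you can base64-encode it and substitute the output for the <hash> placeholder, for example:

$ base64 -w0 seccomp-nostat.json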
[ "oc get pods -n <namespace>", "oc get pods -n workshop", "NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s", "oc get pod parksmap-1-4xkwf -n workshop -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2", "oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json", "seccompProfiles: - localhost/<custom-name>.json 1", "spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/security_and_compliance/seccomp-profiles
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on bare metal. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preface-baremetal
Chapter 3. Updating Red Hat OpenShift Data Foundation 4.17 to 4.18
Chapter 3. Updating Red Hat OpenShift Data Foundation 4.17 to 4.18 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. Because there is no dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first, followed by the RHCS upgrade, or vice versa. For more information about RHCS releases, see the knowledgebase solution. Important Upgrading to 4.18 directly from any version older than 4.17 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.18.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Optional: To reduce the upgrade time for large clusters that are using CSI plugins, make sure to tune the following parameters in the rook-ceph-operator-config configmap to a higher count or percentage. CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE Note By default, the rook-ceph-operator-config configmap is empty and you need to add the data key. This affects the CephFS and CephRBD daemonsets and allows the pods to restart simultaneously or be unavailable, which reduces the upgrade time. For an optimal value, you can set the parameter values to 20%; an example patch command is sketched at the end of this chapter. However, if the value is too high, disruption for new volumes might be observed during the upgrade. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for the noobaa-core account as follows: Log into the AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles .
Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI: Search for the role name that you obtained from the step in the tool and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core . Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.18 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide .
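For the optional prerequisite about tuning the rook-ceph-operator-config configmap, the following is one possible CLI sketch that sets both parameters to 20%. It assumes the configmap is in the openshift-storage namespace and uses a JSON merge patch, which also creates the data key if the configmap is empty; verify the values against your own cluster size before applying.

$ oc patch configmap rook-ceph-operator-config -n openshift-storage --type merge \
  -p '{"data":{"CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE":"20%","CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE":"20%"}}'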
[ "oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/updating_openshift_data_foundation/updating-ocs-to-odf_rhodf
Chapter 3. Bug fixes
Chapter 3. Bug fixes 3.1. Extension secrets are lost on workspace restart Extension secrets for Visual Studio Code - Open Source ("Code - OSS") are no longer lost after workspace restart, but encrypted and persisted in the browser's local storage. This allows extensions like Ansible that use Visual Studio Code Secrets Storage API to persist the data between workspace restarts in the same browser. Additional resources CRW-5942 3.2. Unable to resolve parent devfile when using self-signed certs in disconnected clusters Previously, when you used self-signed certificates on an air-gapped cluster, starting a workspace that referenced a parent devfile by URI would fail with x509: certificate signed by unknown authority error. The defect has been fixed in this release and you can now reference a parent devfile in disconnected clusters. Additional resources CRW-6001 PROJECTS_ROOT environment variable being set incorrectly at workspace startup Previously, PROJECTS_ROOT environment variable was set incorrectly to /projects/projects after workspace startup. The defect has been fixed in this release and the environment variable correctly points to the /projects directory. Additional resources CRW-6025 3.3. The workspace status changed unexpectedly to 'Starting' Previously during a workspace startup, the status could have unexpectedly changed to 'Starting' The defect has been fixed in this release, and status changes (except 'Failed' and 'Terminating') are ignored during workspace startup. Additional resources CRW-6281 3.4. The dashboard pod frequently restarts with exitCode: 137 Previously, the dashboard pod might have been frequently restarting with exitCode: 137 due to a memory leak which has been fixed in this release. Additional resources CRW-6292 3.5. Dashboard URL is unavailable for a few seconds when pod is deleted and restarted Previously, when a pod was restarted, the Dashboard URL could become unavailable for a short period of time during the operator update. The problem has been fixed in this release by adding the appropriate LivenessProbe`and `ReadinessProbe to the Gateway. Additional resources CRW-6524 3.6. After revoking the OAuth application the 'Authorization' indicator is still active in the 'User Preferences' Dashboard The defect related to the misleading status of the 'Authorization' indicator after OAuth revocation from the Dashboard has been fixed in this release. Additional resources CRW-6525 3.7. Dashboard page is blank if DevWorkspace is missing controller.devfile.io/creator label Previously, if a DevWorkspace object was missing the controller.devfile.io/creator label the User Dashboard displayed a blank page. The defect has been fixed in this release. Additional resources CRW-6526 3.8. Projects are not cloned after restarting the workspace using the 'Restart Workspace from Local Devfile' command Before this release, extra projects added to the devfile from a workspace were not cloned if you restarted the workspace with the 'Restart Workspace from Local Devfile' command. With this release, the issue is fixed. Additional resources CRW-6531 3.9. Workspace action menu remains open Before this release, the action menu items such as 'Open' and 'Stop Workspace' remained open on the User Dashboard after you clicked them. With this release, the issue is fixed. Additional resources CRW-6534 3.10. 
Error when starting a workspace with df and override.devfileFilename URL parameters from the dashboard The defect related to errors during workspace startup with the df and override.devfileFilename parameters has been fixed in this release. Additional resources CRW-6535 3.11. Tooling container $PATH is overridden Before this release, process.env.PATH was overridden by the userShellEnv.PATH environment variable. With this release, the values of the process.env.PATH and userShellEnv.PATH environment variables are merged. Additional resources CRW-6536
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/release_notes_and_known_issues/bug-fixes
5.7. Limitations
5.7. Limitations The client setting of the transaction isolation level is not propagated to the connectors. The transaction isolation level can be set on each XA connector; however, this isolation level is fixed and cannot be changed at runtime for specific connections or commands.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/limitations1
Migration Toolkit for Containers
Migration Toolkit for Containers OpenShift Container Platform 4.11 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migration_toolkit_for_containers/index
2.5.4.3. The sadc command
2.5.4.3. The sadc command As stated earlier, the sadc command collects system utilization data and writes it to a file for later analysis. By default, the data is written to files in the /var/log/sa/ directory. The files are named sa <dd> , where <dd> is the current day's two-digit date. sadc is normally run by the sa1 script. This script is periodically invoked by cron via the file sysstat , which is located in /etc/cron.d/ . The sa1 script invokes sadc for a single one-second measuring interval. By default, cron runs sa1 every 10 minutes, adding the data collected during each interval to the current /var/log/sa/sa <dd> file.
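To make the scheduling concrete, a typical entry in /etc/cron.d/sysstat looks roughly like the following. Treat this as an illustrative sketch: the exact path to sa1 and the default interval can vary between sysstat versions, so check the file shipped on your system.

# Run the sa1 collector every 10 minutes; "1 1" means one sample over a one-second interval
*/10 * * * * root /usr/lib/sa/sa1 1 1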
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-resource-tools-sar-sadc
Chapter 7. Migrating the database type from BDB to LMDB on an existing DS instance
Chapter 7. Migrating the database type from BDB to LMDB on an existing DS instance If you have an instance with the Berkeley Database (BDB) backend, you can change the database type on this instance from BDB to Lightning Memory-Mapped Database (LMDB). Note Migration from BDB to LMDB is available only for instances with Directory Server version 12.5 or later. In a mixed environment, consider the following limitations: You cannot use a backup to restore an instance with a different database type, because backup and restore formats are tied to this type. You cannot mix backends with different types on an instance. However, the following mix of implementations is possible: You can mix instances with different backend types on a host. You can mix replicas with different types in your replicated topology. Currently, you can use only the command line to migrate from BDB to LMDB or back. 7.1. Migrating the database type from BDB to LMDB using dsctl You can use the dsctl utility to automatically migrate the Berkeley Database (BDB) backend on an instance to the Lightning Memory-Mapped Database (LMDB). Prerequisites You have root permissions. Procedure Start the migration from BDB to LMDB: The command sets the nsslapd-backend-implement global configuration parameter to mdb and calculates the database size that you can adjust by setting the nsslapd-mdb-max-size parameter value. Remove the migration .ldif file and the old database: Note that you can migrate back from LMDB to BDB by using the dsctl instance_name dblib mdb2bdb command. Verification Check that the nsslapd-backend-implement configuration parameter value is set to mdb : Additional resources nsslapd-backend-implement mdb attributes 7.2. Manually migrating the database type from BDB to LMDB Use manual migration from the Berkeley Database (BDB) backend on an instance to the Lightning Memory-Mapped Database (LMDB) in the following cases: The migration by the dsctl utility cannot be performed. You want to set LMDB configuration attributes manually. Prerequisites You have root permissions. You have the Directory Manager password. Procedure Export all your databases to an LDIF file as described in Exporting data using the command line while the server is offline . Determine the LMDB database maximum size by checking the existing database size and adding a safety margin of 20%. Skip this step if you migrate back from LMDB to BDB. Check the existing database size: Add the safety margin of 20% (a scripted version of this calculation is sketched after the command listing below): You need this maximum database size in a later step. Change the database type to LMDB on the instance and restart the instance: When you migrate back from LMDB to BDB, set the --db-lib option to bdb . Set the LMDB maximum size to the value you calculated in the previous steps (2.16 GB) in bytes and restart the instance: The command sets the nsslapd-mdb-max-size configuration parameter value. Note Skip this step if you migrate back from LMDB to BDB. Import all databases from the LDIF file as described in Importing data using the command line while the server is offline . Verification Check that the nsslapd-backend-implement configuration parameter value is set to mdb :
[ "dsctl instance_name dblib bdb2mdb Backends importation 0.000000% (userroot) Backends importation 100%", "dsctl instance_name dblib cleanup cleanup dbmapdir=/var/lib/dirsrv/slapd-instance_name/db dbhome=/dev/shm/slapd-instance_name dblib=mdb", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config get | grep nsslapd-backend-implement Enter password for cn=Directory Manager on ldap://server.example.com: password nsslapd-backend-implement: mdb", "du -hs /var/lib/dirsrv/slapd-instance_name/db/*/ 1.8GB", "1.8 GB + 20% = 2.16 GB", "dsconf instance_name backend config set --db-lib mdb Successfully updated database configuration dsctl instance_name restart Instance \"instance_name\" has been restarted", "dsconf instance_name backend config set --mdb-max-size 2319282339 Successfully updated database configuration dsctl instance_name restart Instance \"instance_name\" has been restarted", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config get | grep nsslapd-backend-implement Enter password for cn=Directory Manager on ldap://server.example.com: password nsslapd-backend-implement: mdb" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/migrating-the-database-type-from-bdb-to-lmdb-on-an-existing-ds-instance_installing-rhds
4.2.2. Cache Memory
4.2.2. Cache Memory The purpose of cache memory is to act as a buffer between the very limited, very high-speed CPU registers and the relatively slower and much larger main system memory -- usually referred to as RAM [11] . Cache memory has an operating speed similar to the CPU itself so, when the CPU accesses data in cache, the CPU is not kept waiting for the data. Cache memory is configured such that, whenever data is to be read from RAM, the system hardware first checks to determine if the desired data is in cache. If the data is in cache, it is quickly retrieved, and used by the CPU. However, if the data is not in cache, the data is read from RAM and, while being transferred to the CPU, is also placed in cache (in case it is needed again later). From the perspective of the CPU, all this is done transparently, so that the only difference between accessing data in cache and accessing data in RAM is the amount of time it takes for the data to be returned. In terms of storage capacity, cache is much smaller than RAM. Therefore, not every byte in RAM can have its own unique location in cache. As such, it is necessary to split cache up into sections that can be used to cache different areas of RAM, and to have a mechanism that allows each area of cache to cache different areas of RAM at different times. Even with the difference in size between cache and RAM, given the sequential and localized nature of storage access, a small amount of cache can effectively speed access to a large amount of RAM. When writing data from the CPU, things get a bit more complicated. There are two different approaches that can be used. In both cases, the data is first written to cache. However, since the purpose of cache is to function as a very fast copy of the contents of selected portions of RAM, any time a piece of data changes its value, that new value must be written to both cache memory and RAM. Otherwise, the data in cache and the data in RAM would no longer match. The two approaches differ in how this is done. One approach, known as write-through caching, immediately writes the modified data to RAM. Write-back caching, however, delays the writing of modified data back to RAM. The reason for doing this is to reduce the number of times a frequently-modified piece of data must be written back to RAM. Write-through cache is a bit simpler to implement; for this reason it is most common. Write-back cache is a bit trickier to implement; in addition to storing the actual data, it is necessary to maintain some sort of mechanism capable of flagging the cached data as clean (the data in cache is the same as the data in RAM), or dirty (the data in cache has been modified, meaning that the data in RAM is no longer current). It is also necessary to implement a way of periodically flushing dirty cache entries back to RAM. 4.2.2.1. Cache Levels Cache subsystems in present-day computer designs may be multi-level; that is, there might be more than one set of cache between the CPU and main memory. The cache levels are often numbered, with lower numbers being closer to the CPU. Many systems have two cache levels: L1 cache is often located directly on the CPU chip itself and runs at the same speed as the CPU L2 cache is often part of the CPU module, runs at CPU speeds (or nearly so), and is usually a bit larger and slower than L1 cache Some systems (normally high-performance servers) also have L3 cache, which is usually part of the system motherboard. 
As might be expected, L3 cache would be larger (and most likely slower) than L2 cache. In either case, the goal of all cache subsystems -- whether single- or multi-level -- is to reduce the average access time to the RAM. [11] While "RAM" is an acronym for "Random Access Memory," and a term that could easily apply to any storage technology allowing the non-sequential access of stored data, when system administrators talk about RAM they invariably mean main system memory.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-memory-cache
Chapter 17. Network policy
Chapter 17. Network policy 17.1. About network policy As a cluster administrator, you can define network policies that restrict traffic to pods in your cluster. 17.1.1. About network policy In a cluster using a Kubernetes Container Network Interface (CNI) plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.11, OpenShift SDN supports using network policy in its default network isolation mode. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. 
apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 17.1.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 policy-group.network.openshift.io/ingress:"" label supports both OpenShift-SDN and OVN-Kubernetes. 17.1.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 17.1.2. Optimizations for network policy Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. Note The guidelines for efficient use of network policy rules applies to only the OpenShift SDN cluster network provider. It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . 
For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. When designing your network policy, refer to the following guidelines: Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. 17.1.3. steps Creating a network policy Optional: Defining a default network policy 17.1.4. Additional resources Projects and namespaces Configuring multitenant network policy NetworkPolicy API 17.2. Logging network policy events As a cluster administrator, you can configure network policy audit logging for your cluster and enable logging for one or more namespaces. Note Audit logging of network policies is available for only the OVN-Kubernetes cluster network provider . 17.2.1. Network policy audit logging The OVN-Kubernetes cluster network provider uses Open Virtual Network (OVN) ACLs to manage network policy. Audit logging exposes allow and deny ACL events. You can configure the destination for network policy audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. Network policy audit logging is enabled per namespace by annotating the namespace with the k8s.ovn.org/acl-logging key as in the following example: Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } The logging format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . An example log entry might resemble the following: Example ACL deny log entry 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 The following table describes namespace annotation values: Table 17.1. Network policy audit logging namespace annotation Annotation Value k8s.ovn.org/acl-logging You must specify at least one of allow , deny , or both to enable network policy audit logging for a namespace. deny Optional: Specify alert , warning , notice , info , or debug . allow Optional: Specify alert , warning , notice , info , or debug . 17.2.2. Network policy audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates default values for network policy audit logging feature. 
Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for network policy audit logging. Table 17.2. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 17.2.3. Configuring network policy auditing for a cluster As a cluster administrator, you can customize network policy audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To customize the network policy audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Enable audit logging: USD oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert" }' namespace/verify-audit-logging annotated Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: 
["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') Ping the IP address from the command from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_allow-from-same-namespace_0", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_allow-from-same-namespace_0", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 17.2.4. Enabling network policy audit logging for a namespace As a cluster administrator, you can enable network policy audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable network policy audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. 
Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 17.2.5. Disabling network policy audit logging for a namespace As a cluster administrator, you can disable network policy audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable network policy audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 17.2.6. Additional resources About network policy 17.3. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 17.3.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 17.3.2. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. 
Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 17.3.3. Additional resources Accessing the web console 17.4. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 17.4.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 17.4.2. Viewing network policies using the CLI You can examine the network policies in a namespace. Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 17.5. 
Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 17.5.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 17.5.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 17.5.3. Additional resources Creating a network policy 17.6. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 17.6.1. Deleting a network policy using the CLI You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. 
This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 17.7. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 17.7.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 17.7.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. 
The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 17.8. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. Note If you are using the OpenShift SDN cluster network provider, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set. 17.8.1. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. 
A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 17.8.2. steps Defining a default network policy 17.8.3. Additional resources OpenShift SDN network isolation modes
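The policies above isolate a project while still admitting ingress, monitoring, same-namespace, and kube-apiserver-operator traffic. If an isolated project must also accept specific traffic from one other trusted project, you can add a targeted cross-namespace policy, as recommended in the optimization guidelines earlier in this chapter. The following is a minimal sketch of such a policy; the policy name, the trusted-client namespace, the app: backend pod label, and port 8080 are hypothetical placeholders rather than values defined elsewhere in this document:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-trusted-client
spec:
  podSelector:
    matchLabels:
      app: backend                       # hypothetical label on the pods that receive the traffic
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: trusted-client   # hypothetical name of the trusted project
    ports:
    - protocol: TCP
      port: 8080                         # hypothetical application port
  policyTypes:
  - Ingress

Selecting the source by namespace rather than by individual pod labels keeps the number of generated OVS flow rules low on the OpenShift SDN cluster network provider, in line with the guidance in the optimizations section.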
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress", "kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }", "2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "oc edit network.operator.openshift.io/cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF", "namespace/verify-audit-logging created", "oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"alert\" }'", "namespace/verify-audit-logging annotated", "cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF", "networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created", "cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", 
\"-c\"] args: [\"sleep inf\"] EOF", "for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done", "pod/client created pod/server created", "POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')", "oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms", "oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }", "namespace/verify-audit-logging annotated", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-", "kind: Namespace apiVersion: 
v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null", "namespace/verify-audit-logging annotated", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/default-deny created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "oc get networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: 
name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/network-policy
Chapter 5. Network File System
Chapter 5. Network File System A Network File System ( NFS ) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. [5] In Red Hat Enterprise Linux, the nfs-utils package is required for full NFS support. Run the rpm -q nfs-utils command to see if the nfs-utils package is installed. If it is not installed and you want to use NFS, run the following command as the root user to install it: 5.1. NFS and SELinux When running SELinux, the NFS daemons are confined by default. SELinux policy allows NFS to share files by default. [5] Refer to the Storage Administration Guide for more information.
[ "~]# yum install nfs-utils" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/chap-managing_confined_services-network_file_system
Chapter 1. About model serving
Chapter 1. About model serving When you serve a model, you upload a trained model into Red Hat OpenShift AI for querying, which allows you to integrate your trained models into intelligent applications. You can upload a model to an S3-compatible object storage, persistent volume claim, or Open Container Initiative (OCI) image. You can then access and train the model from your project workbench. After training the model, you can serve or deploy the model using a model-serving platform. Serving or deploying the model makes the model available as a service, or model runtime server, that you can access using an API. You can then access the inference endpoints for the deployed model from the dashboard and see predictions based on data inputs that you provide through API calls. Querying the model through the API is also called model inferencing. You can serve models on one of the following model-serving platforms: Single-model serving platform Multi-model serving platform NVIDIA NIM model serving platform The model-serving platform that you choose depends on your business needs: If you want to deploy each model on its own runtime server, or want to use a serverless deployment, select the single-model serving platform . The single-model serving platform is recommended for production use. If you want to deploy multiple models with only one runtime server, select the multi-model serving platform . This option is best if you are deploying more than 1,000 small and medium models and want to reduce resource consumption. If you want to use NVIDIA Inference Microservices (NIMs) to deploy a model, select the NVIDIA NIM-model serving platform . 1.1. Single-model serving platform You can deploy each model on a dedicated model server on the single-model serving platform. Deploying models from a dedicated model server can help you deploy, monitor, scale, and maintain models that require increased resources. This model serving platform is ideal for serving large models. The single-model serving platform is based on the KServe component. The single-model serving platform is helpful for use cases such as: Large language models (LLMs) Generative AI For more information about setting up the single-model serving platform, see Installing the single-model serving platform . 1.2. Multi-model serving platform You can deploy multiple models from the same model server on the multi-model serving platform. Each of the deployed models shares the server resources. Deploying multiple models from the same model server can be advantageous on OpenShift clusters that have finite compute resources or pods. This model serving platform is ideal for serving small and medium models in large quantities. The multi-model serving platform is based on the ModelMesh component. For more information about setting up the multi-model serving platform, see Installing the multi-model serving platform . 1.3. NVIDIA NIM model serving platform You can deploy models using NVIDIA Inference Microservices (NIM) on the NVIDIA NIM model serving platform. NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of microservices designed for secure, reliable deployment of high performance AI model inferencing across clouds, data centers and workstations. NVIDIA NIM inference services are helpful for use cases such as: Using GPU-accelerated containers inferencing models optimized by NVIDIA Deploying generative AI for virtual screening, content generation, and avatar creation The NVIDIA NIM model serving platform is based on the single-model serving platform.
To use the NVIDIA NIM model serving platform, you must first install the single-model serving platform. For more information, see Installing the single-model serving platform .
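Because the single-model serving platform is based on KServe, each deployed model is ultimately represented by a KServe InferenceService resource on the cluster. The following minimal sketch illustrates the general shape of such a resource only; the name, namespace, model format, and storage URI are hypothetical placeholders, and in OpenShift AI you typically create the deployment from the dashboard or your project rather than writing this resource by hand:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model                    # hypothetical deployment name
  namespace: example-ds-project          # hypothetical data science project namespace
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn                    # hypothetical model format
      storageUri: s3://example-bucket/models/example-model   # hypothetical path to the trained model

Once deployed, the platform exposes an inference endpoint that you can query through API calls, as described earlier in this chapter.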
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/serving_models/about-model-serving_about-model-serving
12.7. Using the Clustered Samba Server
12.7. Using the Clustered Samba Server Clients can connect to the Samba share that was exported by connecting to one of the IP addresses specified in the /etc/ctdb/public_addresses file, or using the csmb-server DNS entry we configured earlier, as shown below: or
[ "mount -t cifs //csmb-server/csmb /mnt/sambashare -o user=testmonkey", "[user@clusmb-01 ~]USD smbclient //csmb-server/csmb" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-using-samba-ca
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1]
Chapter 3. OAuthAuthorizeToken [oauth.openshift.io/v1] Description OAuthAuthorizeToken describes an OAuth authorization token Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources clientName string ClientName references the client that created this token. codeChallenge string CodeChallenge is the optional code_challenge associated with this authorization code, as described in rfc7636 codeChallengeMethod string CodeChallengeMethod is the optional code_challenge_method associated with this authorization code, as described in rfc7636 expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURI string RedirectURI is the redirection associated with the token. scopes array (string) Scopes is an array of the requested scopes. state string State data from request userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token. UserUID and UserName must both match for this token to be valid. 3.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthauthorizetokens DELETE : delete collection of OAuthAuthorizeToken GET : list or watch objects of kind OAuthAuthorizeToken POST : create an OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens GET : watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} DELETE : delete an OAuthAuthorizeToken GET : read the specified OAuthAuthorizeToken PATCH : partially update the specified OAuthAuthorizeToken PUT : replace the specified OAuthAuthorizeToken /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} GET : watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/oauth.openshift.io/v1/oauthauthorizetokens HTTP method DELETE Description delete collection of OAuthAuthorizeToken Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAuthorizeToken Table 3.3. 
HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAuthorizeToken Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.2. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens HTTP method GET Description watch individual changes to a list of OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/oauth.openshift.io/v1/oauthauthorizetokens/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken HTTP method DELETE Description delete an OAuthAuthorizeToken Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 202 - Accepted OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAuthorizeToken Table 3.11. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAuthorizeToken Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAuthorizeToken Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body OAuthAuthorizeToken schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK OAuthAuthorizeToken schema 201 - Created OAuthAuthorizeToken schema 401 - Unauthorized Empty 3.2.4. /apis/oauth.openshift.io/v1/watch/oauthauthorizetokens/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the OAuthAuthorizeToken HTTP method GET Description watch changes to an object of kind OAuthAuthorizeToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
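The properties in the specification table above are top-level fields of the resource rather than being nested under a spec block. A minimal sketch of an OAuthAuthorizeToken object is shown below purely to illustrate that shape; every value is a hypothetical placeholder, and in practice the OAuth server creates these tokens during an authorization flow rather than an administrator creating them by hand:

apiVersion: oauth.openshift.io/v1
kind: OAuthAuthorizeToken
metadata:
  name: example-authorize-token          # hypothetical token name
clientName: example-client               # hypothetical OAuth client that requested the code
expiresIn: 300                           # seconds from creation until the token expires
redirectURI: https://example.com/oauth/callback   # hypothetical redirect URI
scopes:
- user:full
userName: example-user                   # hypothetical user the token is issued for
userUID: 00000000-0000-0000-0000-000000000000     # hypothetical UID; must match the user for the token to be valid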
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/oauth_apis/oauthauthorizetoken-oauth-openshift-io-v1
Chapter 5. Preparing your environment for managing IdM using Ansible playbooks
Chapter 5. Preparing your environment for managing IdM using Ansible playbooks As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Keep a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. Using this practice, you can find all your playbooks in one place. Note You can run your ansible-freeipa playbooks without invoking root privileges on the managed nodes. Exceptions include playbooks that use the ipaserver , ipareplica , ipaclient , ipasmartcard_server , ipasmartcard_client and ipabackup ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. The playbooks in the Red Hat Enterprise Linux IdM documentation assume the following security configuration : The IdM admin is your remote Ansible user on the managed nodes. You store the IdM admin password encrypted in an Ansible vault. You have placed the password that protects the Ansible vault in a password file. You block access to the vault password file to everyone except your local ansible user. You regularly remove and re-create the vault password file. Consider also alternative security configurations . 5.1. Preparing a control node and managed nodes for managing IdM using Ansible playbooks Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: These commands require that you enter the IdM admin password. Create a password_file file that contains the vault password: Change the permissions to modify the file: Create a secret.yml Ansible vault to store the IdM admin password: Configure password_file to store the vault password: When prompted, enter the content of the secret.yml file: Note To use the encrypted ipaadmin_password in a playbook, you must use the vars_file directive. For example, a simple playbook to delete an IdM user can look as follows: When executing a playbook, instruct Ansible use the vault password to decrypt ipaadmin_password by adding the --vault-password-file= password_file option. For example: Warning For security reasons, remove the vault password file at the end of each session, and repeat steps 6-8 at the start of each new session. 
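The same vars_files pattern works for any ansible-freeipa module that needs the IdM admin password. As a further illustration, the following sketch creates a user instead of deleting one; the user name and attributes are hypothetical, and it assumes that the ipauser module accepts first and last name attributes when adding a user:

---
- name: Playbook to handle users
  hosts: ipaserver
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  - name: Ensure user engineer01 is present
    ipauser:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: engineer01
      first: Eva
      last: Engineer
      state: present

Run it in the same way as the delete example, passing --vault-password-file=password_file so that Ansible can decrypt ipaadmin_password.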
Additional resources Different methods to provide the credentials required for ansible-freeipa playbooks Installing an Identity Management server using an Ansible playbook How to build your inventory 5.2. Different methods to provide the credentials required for ansible-freeipa playbooks There are advantages and disadvantages to the different methods for providing the credentials required for running playbooks that use ansible-freeipa roles and modules. Storing passwords in plain text in a playbook Benefits : You are not prompted every time you run the playbook. Easy to implement. Drawbacks : Everyone with access to the file can read the password. Setting wrong permissions and sharing the file, for example in an internal or external repository, can compromise security. High maintenance work: if the password is changed, it needs to be changed in all playbooks. Entering passwords interactively when you execute a playbook Benefits : No one can steal the password as it is not stored anywhere. You can update the password easily. Easy to implement. Drawbacks : If you are using Ansible playbooks in scripts, the requirement to enter the password interactively can be inconvenient. Storing passwords in an Ansible vault and the vault password in a file Benefits : The user password is stored encrypted. You can update the user password easily, by creating a new Ansible vault. You can update the password file that protects the Ansible vault easily, by using the ansible-vault rekey --new-vault-password-file=NEW_VAULT_PASSWORD_FILE secret.yml command. If you are using Ansible playbooks in scripts, it is convenient not to have to enter the password protecting the Ansible vault interactively. Drawbacks : It is vital that the file that contains the sensitive plain text password be protected through file permissions and other security measures. Storing passwords in an Ansible vault and entering the vault password interactively Benefits : The user password is stored encrypted. No one can steal the vault password as it is not stored anywhere. You can update the user password easily, by creating a new Ansible vault. You can update the vault password easily too, by using the ansible-vault rekey file_name command. Drawbacks : If you are using Ansible playbooks in scripts, the need to enter the vault password interactively can be inconvenient. Additional resources Preparing a control node and managed nodes for managing IdM using Ansible playbooks What is Zero trust? Protecting sensitive data with Ansible vault
[ "cd ~/MyPlaybooks", "[defaults] inventory = /home/ your_username /MyPlaybooks/inventory remote_user = admin", "[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us", "ssh-keygen", "ssh-copy-id [email protected] ssh-copy-id [email protected]", "redhat", "chmod 0600 password_file", "ansible-vault create --vault-password-file=password_file secret.yml", "ipaadmin_password: Secret123", "--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/ user_name /MyPlaybooks/secret.yml tasks: - name: Delete user robot ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: robot state: absent", "ansible-playbook -i inventory --vault-password-file=password_file del-user.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/preparing-your-environment-for-managing-idm-using-ansible-playbooks_using-ansible-to-install-and-manage-idm
function::proc_mem_txt
function::proc_mem_txt Name function::proc_mem_txt - Program text (code) size in pages Synopsis Arguments None Description Returns the current process text (code) size in pages, or zero when there is no current process or the number of pages couldn't be retrieved.
[ "proc_mem_txt:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-proc-mem-txt
Chapter 10. Configuring AWS STS for Red Hat Quay
Chapter 10. Configuring AWS STS for Red Hat Quay Support for Amazon Web Services (AWS) Security Token Service (STS) is available for standalone Red Hat Quay deployments and Red Hat Quay on OpenShift Container Platform. AWS STS is a web service for requesting temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users and for users that you authenticate, or federated users . This feature is useful for clusters using Amazon S3 as an object storage, allowing Red Hat Quay to use STS protocols to authenticate with Amazon S3, which can enhance the overall security of the cluster and help to ensure that access to sensitive data is properly authenticated and authorized. Configuring AWS STS is a multi-step process that requires creating an AWS IAM user, creating an S3 role, and configuring your Red Hat Quay config.yaml file to include the proper resources. Use the following procedures to configure AWS STS for Red Hat Quay. 10.1. Creating an IAM user Use the following procedure to create an IAM user. Procedure Log in to the Amazon Web Services (AWS) console and navigate to the Identity and Access Management (IAM) console. In the navigation pane, under Access management , click Users . Click Create User and enter the following information: Enter a valid username, for example, quay-user . For Permissions options , click Add user to group . On the review and create page, click Create user . You are redirected to the Users page. Click the username, for example, quay-user . Copy the ARN of the user, for example, arn:aws:iam::123492922789:user/quay-user . On the same page, click the Security credentials tab. Navigate to Access keys . Click Create access key . On the Access key best practices & alternatives page, click Command Line Interface (CLI) , and then check the confirmation box. Then click Next . Optional. On the Set description tag - optional page, enter a description. Click Create access key . Copy and store the access key and the secret access key. Important This is the only time that the secret access key can be viewed or downloaded. You cannot recover it later. However, you can create a new access key any time. Click Done . 10.2. Creating an S3 role Use the following procedure to create an S3 role for AWS STS. Prerequisites You have created an IAM user and stored the access key and the secret access key. Procedure If you are not already there, navigate to the IAM dashboard by clicking Dashboard . In the navigation pane, click Roles under Access management . Click Create role . Click Custom Trust Policy , which shows an editable JSON policy. By default, it shows the following information: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": {}, "Action": "sts:AssumeRole" } ] } Under the Principal configuration field, add your AWS ARN information. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123492922789:user/quay-user" }, "Action": "sts:AssumeRole" } ] } Click Next . On the Add permissions page, type AmazonS3FullAccess in the search box. Check the box to add that policy to the S3 role, then click Next . On the Name, review, and create page, enter the following information: Enter a role name, for example, example-role . Optional. Add a description. Click the Create role button. You are navigated to the Roles page. Under Role name , the newly created S3 role should be available. 10.3.
Configuring Red Hat Quay on OpenShift Container Platform to use AWS STS Use the following procedure to edit your Red Hat Quay on OpenShift Container Platform config.yaml file to use AWS STS. Note You can also edit and re-deploy your Red Hat Quay on OpenShift Container Platform config.yaml file directly instead of using the OpenShift Container Platform UI. Prerequisites You have configured a Role ARN. You have generated a User Access Key. You have generated a User Secret Key. Procedure On the Home page of your OpenShift Container Platform deployment, click Operators > Installed Operators . Click Red Hat Quay . Click Quay Registry and then the name of your Red Hat Quay registry. Under Config Bundle Secret , click the name of your registry configuration bundle, for example, quay-registry-config-bundle-qet56 . On the configuration bundle page, click Actions to reveal a drop-down menu. Then click Edit Secret . Update the DISTRIBUTED_STORAGE_CONFIG fields of your config.yaml file with the following information: # ... DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6 # ... 1 The unique Amazon Resource Name (ARN) required when configuring AWS STS. 2 The name of your s3 bucket. 3 The storage path for data. Usually /datastorage . 4 Optional. The Amazon Web Services region. Defaults to us-east-1 . 5 The generated AWS S3 user access key required when configuring AWS STS. 6 The generated AWS S3 user secret key required when configuring AWS STS. Click Save . Verification Tag a sample image, for example, busybox , that will be pushed to the repository. For example: $ podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test Push the sample image by running the following command: $ podman push <quay-server.example.com>/<organization_name>/busybox:test Verify that the push was successful by navigating to the Organization that you pushed the image to in your Red Hat Quay registry, and then clicking Tags . Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket. Click the name of your s3 bucket. On the Objects page, click datastorage/ . On the datastorage/ page, the following resources should be visible: sha256/ uploads/ These resources indicate that the push was successful, and that AWS STS is properly configured.
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": {}, \"Action\": \"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::123492922789:user/quay-user\" }, \"Action\": \"sts:AssumeRole\" } ] }", "DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test", "podman push <quay-server.example.com>/<organization_name>/busybox:test" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_operator_features/configuring-aws-sts-quay
APIs
APIs Red Hat Advanced Cluster Management for Kubernetes 2.12 APIs
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/apis/index
Chapter 7. Reviewing your Ansible configuration with automation content navigator
Chapter 7. Reviewing your Ansible configuration with automation content navigator As a content creator, you can review your Ansible configuration with automation content navigator and interactively delve into settings. 7.1. Reviewing your Ansible configuration from automation content navigator You can review your Ansible configuration with the automation content navigator text-based user interface in interactive mode and delve into the settings. Automation content navigator pulls in the results from an accessible Ansible configuration file, or returns the defaults if no configuration file is present. Prerequisites You have authenticated to the Red Hat registry if you need to access additional automation execution environments. See Red Hat Container Registry Authentication for details. Procedure Start automation content navigator USD ansible-navigator Optional: type ansible-navigator config from the command line to access the Ansible configuration settings. Review the Ansible configuration. :config Some values reflect settings from within the automation execution environments needed for the automation execution environments to function. These display as non-default settings you cannot set in your Ansible configuration file. Type the number corresponding to the setting you want to delve into, or type :<number> for numbers greater than 9. ANSIBLE COW ACCEPTLIST (current: ['bud-frogs', 'bunny', 'cheese']) (default: 0│--- 1│current: 2│- bud-frogs 3│- bunny 4│- cheese 5│default: 6│- bud-frogs 7│- bunny 8│- cheese 9│- daemon The output shows the current setting as well as the default . Note the source in this example is env since the setting comes from the automation execution environments. Verification Review the configuration output. Additional resources ansible-config . Introduction to Ansible configuration .
[ "ansible-navigator", ":config", "ANSIBLE COW ACCEPTLIST (current: ['bud-frogs', 'bunny', 'cheese']) (default: 0│--- 1│current: 2│- bud-frogs 3│- bunny 4│- cheese 5│default: 6│- bud-frogs 7│- bunny 8│- cheese 9│- daemon" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/assembly-review-config-navigator_ansible-navigator
Chapter 26. offset
Chapter 26. offset The offset value. Can represent bytes to the start of the log line in the file (zero- or one-based), or log line numbers (zero- or one-based), so long as the values are strictly monotonically increasing in the context of a single log file. The values are allowed to wrap, representing a new version of the log file (rotation). Data type long
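Purely as an illustration (the neighboring field name is hypothetical and not defined in this chapter), a collector that tracks byte offsets might emit a record such as:

{
  "message": "GET /healthz 200",
  "offset": 483921
}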
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/offset
3.13. Setting Partial Results Mode
3.13. Setting Partial Results Mode Partial results mode is off by default but you can turn it on for all queries in a connection by using either setPartialResultsMode(true) on a DataSource or partialResultsMode=true on a JDBC URL. In either case, you can toggle partial results mode on or off later with a SET statement. This is how you configure the partial results mode using the SET statement:
[ "Statement statement = ...obtain statement from Connection statement.execute(\"set partialResultsMode true\");" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/setting_partial_results_mode1
Chapter 1. The basics of Ceph configuration
Chapter 1. The basics of Ceph configuration As a storage administrator, you need to have a basic understanding of how to view the Ceph configuration, and how to set the Ceph configuration options for the Red Hat Ceph Storage cluster. You can view and set the Ceph configuration options at runtime. Prerequisites Installation of the Red Hat Ceph Storage software. 1.1. Ceph configuration All Red Hat Ceph Storage clusters have a configuration, which defines: Cluster Identity Authentication settings Ceph daemons Network configuration Node names and addresses Paths to keyrings Paths to OSD log files Other runtime options A deployment tool, such as cephadm , will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a Red Hat Ceph Storage cluster without using a deployment tool. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 1.2. The Ceph configuration database The Ceph Monitor manages a configuration database of Ceph options that centralize configuration management by storing configuration options for the entire storage cluster. By centralizing the Ceph configuration in a database, this simplifies storage cluster administration. The priority order that Ceph uses to set options is: Compiled-in default values Ceph cluster configuration database Local ceph.conf file Runtime override, using the ceph daemon DAEMON-NAME config set or ceph tell DAEMON-NAME injectargs commands There are still a few Ceph options that can be defined in the local Ceph configuration file, which is /etc/ceph/ceph.conf by default. However, ceph.conf has been deprecated for Red Hat Ceph Storage 7. cephadm uses a basic ceph.conf file that only contains a minimal set of options for connecting to Ceph Monitors, authenticating, and fetching configuration information. In most cases, cephadm uses only the mon_host option. To avoid using ceph.conf only for the mon_host option, use DNS SRV records to perform operations with Monitors. Important Red Hat recommends that you use the assimilate-conf administrative command to move valid options into the configuration database from the ceph.conf file. For more information about assimilate-conf , see Administrative Commands. Ceph allows you to make changes to the configuration of a daemon at runtime. This capability can be useful for increasing or decreasing the logging output, by enabling or disabling debug settings, and can even be used for runtime optimization. Note When the same option exists in the configuration database and the Ceph configuration file, the configuration database option has a lower priority than what is set in the Ceph configuration file. Sections and Masks Just as you can configure Ceph options globally, per daemon type, or by a specific daemon in the Ceph configuration file, you can also configure the Ceph options in the configuration database according to these sections: Section Description global Affects all daemons and clients. mon Affects all Ceph Monitors. mgr Affects all Ceph Managers. osd Affects all Ceph OSDs. mds Affects all Ceph Metadata Servers. client Affects all Ceph Clients, including mounted file systems, block devices, and RADOS Gateways. Ceph configuration options can have a mask associated with them. These masks can further restrict which daemons or clients the options apply to. Masks have two forms: type:location The type is a CRUSH property, for example, rack or host . 
The location is a value for the property type. For example, host:foo limits the option only to daemons or clients running on the foo host. Example class:device-class The device-class is the name of the CRUSH device class, such as hdd or ssd . For example, class:ssd limits the option only to Ceph OSDs backed by solid state drives (SSD). This mask has no effect on non-OSD daemons or clients. Example Administrative Commands The Ceph configuration database can be administered with the subcommand ceph config ACTION . These are the actions you can do: ls Lists the available configuration options. dump Dumps the entire configuration database of options for the storage cluster. get WHO Dumps the configuration for a specific daemon or client. For example, WHO can be a daemon, like mds.a . set WHO OPTION VALUE Sets a configuration option in the Ceph configuration database, where WHO is the target daemon, OPTION is the option to set, and VALUE is the desired value. show WHO Shows the reported running configuration for a running daemon. These options might be different from those stored by the Ceph Monitors if there is a local configuration file in use or options have been overridden on the command line or at run time. Also, the source of the option values is reported as part of the output. assimilate-conf -i INPUT_FILE -o OUTPUT_FILE Assimilate a configuration file from the INPUT_FILE and move any valid options into the Ceph Monitors' configuration database. Any options that are unrecognized, invalid, or cannot be controlled by the Ceph Monitor return in an abbreviated configuration file stored in the OUTPUT_FILE . This command can be useful for transitioning from legacy configuration files to a centralized configuration database. Note that when you assimilate a configuration and the Monitors or other daemons have different configuration values set for the same set of options, the end result depends on the order in which the files are assimilated. help OPTION -f json-pretty Displays help for a particular OPTION using a JSON-formatted output. Additional Resources For more information about the command, see Setting a specific configuration at runtime . 1.3. Using the Ceph metavariables Metavariables simplify Ceph storage cluster configuration dramatically. When a metavariable is set in a configuration value, Ceph expands the metavariable into a concrete value. Metavariables are very powerful when used within the [global] , [osd] , [mon] , or [client] sections of the Ceph configuration file. However, you can also use them with the administration socket. Ceph metavariables are similar to Bash shell expansion. Ceph supports the following metavariables: $cluster Description Expands to the Ceph storage cluster name. Useful when running multiple Ceph storage clusters on the same hardware. Example /etc/ceph/$cluster.keyring Default ceph $type Description Expands to one of osd or mon , depending on the type of the instant daemon. Example /var/lib/ceph/$type $id Description Expands to the daemon identifier. For osd.0 , this would be 0 . Example /var/lib/ceph/$type/$cluster-$id $host Description Expands to the host name of the instant daemon. $name Description Expands to $type.$id . Example /var/run/ceph/$cluster-$name.asok 1.4. Viewing the Ceph configuration at runtime The Ceph configuration files can be viewed at boot time and run time. Prerequisites Root-level access to the Ceph node. Access to admin keyring.
Procedure To view a runtime configuration, log in to a Ceph node running the daemon and execute: Syntax To see the configuration for osd.0 , you can log into the node containing osd.0 and execute this command: Example For additional options, specify a daemon and help . Example 1.5. Viewing a specific configuration at runtime Configuration settings for Red Hat Ceph Storage can be viewed at runtime from the Ceph Monitor node. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Log into a Ceph node and execute: Syntax Example 1.6. Setting a specific configuration at runtime To set a specific Ceph configuration at runtime, use the ceph config set command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor or OSD nodes. Procedure Set the configuration on all Monitor or OSD daemons : Syntax Example Validate that the option and value are set: Example To remove the configuration option from all daemons: Syntax Example To set the configuration for a specific daemon: Syntax Example To validate that the configuration is set for the specified daemon: Example To remove the configuration for a specific daemon: Syntax Example Note If you use a client that does not support reading options from the configuration database, or if you still need to use ceph.conf to change your cluster configuration for other reasons, run the following command: You must maintain and distribute the ceph.conf file across the storage cluster. 1.7. OSD Memory Target BlueStore keeps OSD heap memory usage under a designated target size with the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. Use this option when TCMalloc is configured as the memory allocator, and when the bluestore_cache_autotune option in BlueStore is set to true . Ceph OSD memory caching is more important when the block device is slow; for example, traditional hard drives, because the benefit of a cache hit is much higher than it would be with a solid state drive. However, this must be weighed into a decision to colocate OSDs with other services, such as in a hyper-converged infrastructure (HCI) or other applications. 1.7.1. Setting the OSD memory target Use the osd_memory_target option to set the maximum memory threshold for all OSDs in the storage cluster, or for specific OSDs. An OSD with an osd_memory_target option set to 16 GB might use up to 16 GB of memory. Note Configuration options for individual OSDs take precedence over the settings for all OSDs. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all hosts in the storage cluster. Procedure To set osd_memory_target for all OSDs in the storage cluster: Syntax VALUE is the number of GBytes of memory to be allocated to each OSD in the storage cluster. To set osd_memory_target for a specific OSD in the storage cluster: Syntax .id is the ID of the OSD and VALUE is the number of GB of memory to be allocated to the specified OSD. For example, to configure the OSD with ID 8 to use up to 16 GBytes of memory: Example To set an individual OSD to use one maximum amount of memory and configure the rest of the OSDs to use another amount, specify the individual OSD first: Example Additional resources To configure Red Hat Ceph Storage to autotune OSD memory usage, see Automatically tuning OSD memory in the Operations Guide . 1.8. 
Automatically tuning OSD memory The OSD daemons adjust the memory consumption based on the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. If Red Hat Ceph Storage is deployed on dedicated nodes that do not share memory with other services, cephadm automatically adjusts the per-OSD consumption based on the total amount of RAM and the number of deployed OSDs. Important By default, the osd_memory_target_autotune parameter is set to true in the Red Hat Ceph Storage cluster. Syntax Cephadm starts with the fraction mgr/cephadm/autotune_memory_target_ratio , which defaults to 0.7 of the total RAM in the system, subtracts any memory consumed by non-autotuned daemons, such as non-OSD daemons and OSDs for which osd_memory_target_autotune is false, and then divides the result by the number of remaining OSDs. The osd_memory_target parameter is calculated as follows: Syntax SPACE_ALLOCATED_FOR_OTHER_DAEMONS may optionally include the following daemon space allocations: Alertmanager: 1 GB Grafana: 1 GB Ceph Manager: 4 GB Ceph Monitor: 2 GB Node-exporter: 1 GB Prometheus: 1 GB For example, if a node has 24 OSDs and has 251 GB RAM space, then osd_memory_target is 7860684936 . The final targets are reflected in the configuration database with options. You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under the MEM LIMIT column. Note The default setting of osd_memory_target_autotune true is unsuitable for hyperconverged infrastructures where compute and Ceph storage services are colocated. In a hyperconverged infrastructure, the autotune_memory_target_ratio can be set to 0.2 to reduce the memory consumption of Ceph. Example You can manually set a specific memory target for an OSD in the storage cluster. Example You can manually set a specific memory target for an OSD host in the storage cluster. Syntax Example Note Enabling osd_memory_target_autotune overwrites existing manual OSD memory target settings. To prevent daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled, set the _no_autotune_memory label on the host. Syntax You can exclude an OSD from memory autotuning by disabling the autotune option and setting a specific memory target. Example 1.9. MDS Memory Cache Limit MDS servers keep their metadata in a separate storage pool, named cephfs_metadata , and are the users of Ceph OSDs. For Ceph File Systems, MDS servers have to support an entire Red Hat Ceph Storage cluster, not just a single storage device within the storage cluster, so their memory requirements can be significant, particularly if the workload consists of small-to-medium-size files, where the ratio of metadata to data is much higher. Example: Set the mds_cache_memory_limit to 2000000000 bytes Note For a large Red Hat Ceph Storage cluster with a metadata-intensive workload, do not put an MDS server on the same node as other memory-intensive services. Doing so gives you the option to allocate more memory to MDS, for example, sizes greater than 100 GB. Additional Resources See Metadata Server cache size limits in Red Hat Ceph Storage File System Guide . See the general Ceph configuration options in Configuration options for specific option descriptions and usage.
[ "ceph config set osd/host:magna045 debug_osd 20", "ceph config set osd/class:hdd osd_max_backfills 8", "ceph daemon DAEMON_TYPE . ID config show", "ceph daemon osd.0 config show", "ceph daemon osd.0 help", "ceph daemon DAEMON_TYPE . ID config get PARAMETER", "ceph daemon osd.0 config get public_addr", "ceph config set DAEMON CONFIG-OPTION VALUE", "ceph config set osd debug_osd 10", "ceph config dump osd advanced debug_osd 10/10", "ceph config rm DAEMON CONFIG-OPTION VALUE", "ceph config rm osd debug_osd", "ceph config set DAEMON . DAEMON-NUMBER CONFIG-OPTION VALUE", "ceph config set osd.0 debug_osd 10", "ceph config dump osd.0 advanced debug_osd 10/10", "ceph config rm DAEMON . DAEMON-NUMBER CONFIG-OPTION", "ceph config rm osd.0 debug_osd", "ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf false", "ceph config set osd osd_memory_target VALUE", "ceph config set osd.id osd_memory_target VALUE", "ceph config set osd.8 osd_memory_target 16G", "ceph config set osd osd_memory_target 16G ceph config set osd.8 osd_memory_target 8G", "ceph config set osd osd_memory_target_autotune true", "osd_memory_target = TOTAL_RAM_OF_THE_OSD * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS )", "ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2", "ceph config set osd.123 osd_memory_target 7860684936", "ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES", "ceph config set osd/host:host01 osd_memory_target 1000000000", "ceph orch host label add HOSTNAME _no_autotune_memory", "ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G", "ceph_conf_overrides: mds: mds_cache_memory_limit=2000000000" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/the-basics-of-ceph-configuration
Release Notes for AMQ Streams 2.5 on OpenShift
Release Notes for AMQ Streams 2.5 on OpenShift Red Hat Streams for Apache Kafka 2.5 Highlights of what's new and what's changed with this release of AMQ Streams on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_openshift/index
13.6. Locking File Saving on Disk
13.6. Locking File Saving on Disk You can disable the Save and Save As dialogs. This can be useful if you are giving temporary access to a user or you do not want the user to save files to the computer. Important This feature will only work in applications which support it. Not all GNOME and third party applications have this feature enabled. These changes will have no effect on applications which do not support this feature. You prevent applications from file saving by locking down the org.gnome.desktop.lockdown.disable-save-to-disk key. Follow the procedure: Procedure 13.6. Locking Down the org.gnome.desktop.lockdown.disable-save-to-disk Key Create the user profile in /etc/dconf/profile/user unless it already exists: Create a local database for machine-wide settings in the /etc/dconf/db/local.d/00-lockdown file. Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/lockdown : Update the system databases by running Having followed these steps, applications supporting this lockdown key, for example Videos , Image Viewer , Evolution , Document Viewer , or GNOME Shell will disable their "Save As" dialogs.
[ "user-db:user system-db:local", "Prevent the user from saving files on disk disable-save-to-disk=true", "Lock this key to disable saving files on disk /org/gnome/desktop/lockdown/disable-save-to-disk", "dconf update" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/lockdown-file-saving
Chapter 147. Vert.x WebSocket
Chapter 147. Vert.x WebSocket Since Camel 3.5 Both producer and consumer are supported . The http://vertx.io/ Vertx] WebSocket component provides WebSocket capabilities as a WebSocket server, or as a client to connect to an existing WebSocket. 147.1. Dependencies When using vertx-websocket with Red Hat build of Camel Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-websocket-starter</artifactId> </dependency> 147.2. URI format 147.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 147.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 147.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 147.4. Component Options The Vert.x WebSocket component supports 11 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean allowOriginHeader (advanced) Whether the WebSocket client should add the Origin header to the WebSocket handshake request. 
true boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultHost (advanced) Default value for host name that the WebSocket should bind to. 0.0.0.0 String defaultPort (advanced) Default value for the port that the WebSocket should bind to. 0 int originHeaderUrl (advanced) The value of the Origin header that the WebSocket client should use on the WebSocket handshake request. When not specified, the WebSocket client will automatically determine the value for the Origin from the request URL. String router (advanced) To provide a custom vertx router to use on the WebSocket server. Router vertx (advanced) To use an existing vertx instead of creating a new instance. Vertx vertxOptions (advanced) To provide a custom set of vertx options for configuring vertx. VertxOptions useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 147.5. Endpoint Options The Vert.x WebSocket endpoint is configured using URI syntax: with the following path and query parameters: 147.5.1. Path Parameters (3 parameters) Name Description Default Type host (common) Required WebSocket hostname, such as localhost or a remote host when in client mode. String port (common) Required WebSocket port number to use. int path (common) WebSocket path to use. String 147.5.2. Query Parameters (18 parameters) Name Description Default Type allowedOriginPattern (consumer) Regex pattern to match the origin header sent by WebSocket clients. String allowOriginHeader (consumer) Whether the WebSocket client should add the Origin header to the WebSocket handshake request. true boolean consumeAsClient (consumer) When set to true, the consumer acts as a WebSocket client, creating exchanges on each received WebSocket event. false boolean fireWebSocketConnectionEvents (consumer) Whether the server consumer will create a message exchange when a new WebSocket peer connects or disconnects. false boolean maxReconnectAttempts (consumer) When consumeAsClient is set to true this sets the maximum number of allowed reconnection attempts to a previously closed WebSocket. A value of 0 (the default) will attempt to reconnect indefinitely. 0 int originHeaderUrl (consumer) The value of the Origin header that the WebSocket client should use on the WebSocket handshake request. When not specified, the WebSocket client will automatically determine the value for the Origin from the request URL. String reconnectInitialDelay (consumer) When consumeAsClient is set to true this sets the initial delay in milliseconds before attempting to reconnect to a previously closed WebSocket. 0 int reconnectInterval (consumer) When consumeAsClient is set to true this sets the interval in milliseconds at which reconnecting to a previously closed WebSocket occurs. 1000 int router (consumer) To use an existing vertx router for the HTTP server. Router serverOptions (consumer) Sets customized options for configuring the HTTP server hosting the WebSocket for the consumer. 
HttpServerOptions bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern clientOptions (producer) Sets customized options for configuring the WebSocket client used in the producer. HttpClientOptions clientSubProtocols (producer) Comma separated list of WebSocket subprotocols that the client should use for the Sec-WebSocket-Protocol header. String sendToAll (producer) To send to all websocket subscribers. Can be used to configure at the endpoint level, instead of providing the VertxWebsocketConstants.SEND_TO_ALL header on the message. Note that when using this option, the host name specified for the vertx-websocket producer URI must match one used for an existing vertx-websocket consumer. Note that this option only applies when producing messages to endpoints hosted by the vertx-websocket consumer and not to an externally hosted WebSocket. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters 147.6. Message Headers The Vert.x WebSocket component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelVertxWebsocket.connectionKey (common) Constant: CONNECTION_KEY Sends the message to the client with the given connection key. You can use a comma separated list of keys to send a message to multiple clients. Note that this option only applies when producing messages to endpoints hosted by the vertx-websocket consumer and not to an externally hosted WebSocket. String CamelVertxWebsocket.sendToAll (producer) Constant: SEND_TO_ALL Sends the message to all clients which are currently connected. You can use the sendToAll option on the endpoint instead of using this header. Note that this option only applies when producing messages to endpoints hosted by the vertx-websocket consumer and not to an externally hosted WebSocket. boolean CamelVertxWebsocket.remoteAddress (consumer) Constant: REMOTE_ADDRESS The remote address. SocketAddress CamelVertxWebsocket.event (consumer) Constant: EVENT The WebSocket event that triggered the message exchange. 
Enum values: CLOSE ERROR MESSAGE OPEN VertxWebsocketEvent 147.7. Usage The following example shows how to expose a WebSocket on http://localhost:8080/echo and returns an 'echo' response back to the same channel: from("vertx-websocket:localhost:8080/echo") .transform().simple("Echo: USD{body}") .to("vertx-websocket:localhost:8080/echo"); It is also possible to configure the consumer to connect as a WebSocket client on a remote address with the consumeAsClient option: from("vertx-websocket:my.websocket.com:8080/chat?consumeAsClient=true") .log("Got WebSocket message USD{body}"); 147.8. Path and query parameters The WebSocket server consumer supports the configuration of parameterized paths. The path parameter value will be set as a Camel exchange header: from("vertx-websocket:localhost:8080/chat/{user}") .log("New message from USD{header.user} >>> USD{body}") You can also retrieve any query parameter values that were used by the WebSocket client to connect to the server endpoint: from("direct:sendChatMessage") .to("vertx-websocket:localhost:8080/chat/camel?role=admin"); from("vertx-websocket:localhost:8080/chat/{user}") .log("New message from USD{header.user} (USD{header.role}) >>> USD{body}") 147.9. Sending messages to peers connected to the vertx-websocket server consumer Note This section only applies when producing messages to a WebSocket hosted by the camel-vertx-websocket consumer. It is not relevant when producing messages to an externally hosted WebSocket. To send a message to all peers connected to a WebSocket hosted by the vertx-websocket server consumer, use the sendToAll=true endpoint option, or the CamelVertxWebsocket.sendToAll header. from("vertx-websocket:localhost:8080/chat") .log("Got WebSocket message USD{body}"); from("direct:broadcastMessage") .setBody().constant("This is a broadcast message!") .to("vertx-websocket:localhost:8080/chat?sendToAll=true"); Alternatively, you can send messages to specific peers by using the CamelVertxWebsocket.connectionKey header. Multiple peers can be specified as a comma separated list. The value of the connectionKey can be determined whenever a peer triggers an event on the vertx-websocket consumer, where a unique key identifying the peer will be propagated via the CamelVertxWebsocket.connectionKey header. from("vertx-websocket:localhost:8080/chat") .log("Got WebSocket message USD{body}"); from("direct:broadcastMessage") .setBody().constant("This is a broadcast message!") .setHeader(VertxWebsocketConstants.CONNECTION_KEY).constant("key-1,key-2,key-3") .to("vertx-websocket:localhost:8080/chat"); 147.10. SSL By default, the ws:// protocol is used, but secure connections with wss:// are supported by configuring the consumer or producer via the sslContextParameters URI parameter and the Camel JSSE Configuration Utility . 147.11. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.vertx-websocket.allow-origin-header Whether the WebSocket client should add the Origin header to the WebSocket handshake request. true Boolean camel.component.vertx-websocket.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.vertx-websocket.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.vertx-websocket.default-host Default value for host name that the WebSocket should bind to. 0.0.0.0 String camel.component.vertx-websocket.default-port Default value for the port that the WebSocket should bind to. 0 Integer camel.component.vertx-websocket.enabled Whether to enable auto configuration of the vertx-websocket component. This is enabled by default. Boolean camel.component.vertx-websocket.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.vertx-websocket.origin-header-url The value of the Origin header that the WebSocket client should use on the WebSocket handshake request. When not specified, the WebSocket client will automatically determine the value for the Origin from the request URL. String camel.component.vertx-websocket.router To provide a custom vertx router to use on the WebSocket server. The option is a io.vertx.ext.web.Router type. Router camel.component.vertx-websocket.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.vertx-websocket.vertx To use an existing vertx instead of creating a new instance. The option is a io.vertx.core.Vertx type. Vertx camel.component.vertx-websocket.vertx-options To provide a custom set of vertx options for configuring vertx. The option is a io.vertx.core.VertxOptions type. VertxOptions
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-websocket-starter</artifactId> </dependency>", "vertx-websocket://hostname[:port][/resourceUri][?options]", "vertx-websocket:host:port/path", "from(\"vertx-websocket:localhost:8080/echo\") .transform().simple(\"Echo: USD{body}\") .to(\"vertx-websocket:localhost:8080/echo\");", "from(\"vertx-websocket:my.websocket.com:8080/chat?consumeAsClient=true\") .log(\"Got WebSocket message USD{body}\");", "from(\"vertx-websocket:localhost:8080/chat/{user}\") .log(\"New message from USD{header.user} >>> USD{body}\")", "from(\"direct:sendChatMessage\") .to(\"vertx-websocket:localhost:8080/chat/camel?role=admin\"); from(\"vertx-websocket:localhost:8080/chat/{user}\") .log(\"New message from USD{header.user} (USD{header.role}) >>> USD{body}\")", "from(\"vertx-websocket:localhost:8080/chat\") .log(\"Got WebSocket message USD{body}\"); from(\"direct:broadcastMessage\") .setBody().constant(\"This is a broadcast message!\") .to(\"vertx-websocket:localhost:8080/chat?sendToAll=true\");", "from(\"vertx-websocket:localhost:8080/chat\") .log(\"Got WebSocket message USD{body}\"); from(\"direct:broadcastMessage\") .setBody().constant(\"This is a broadcast message!\") .setHeader(VertxWebsocketConstants.CONNECTION_KEY).constant(\"key-1,key-2,key-3\") .to(\"vertx-websocket:localhost:8080/chat\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-vertx-websocket-component-starter
3.6 Release Notes
3.6 Release Notes Red Hat Software Collections 3 Release Notes for Red Hat Software Collections 3.6 Lenka Spackova Red Hat Customer Content Services [email protected] Jaromir Hradilek Red Hat Customer Content Services [email protected] Eliska Slobodova Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.6_release_notes/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.14/making-open-source-more-inclusive
Chapter 2. Support policy for Eclipse Temurin
Chapter 2. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.25/rn-openjdk-temurin-support-policy
function::local_clock_us
function::local_clock_us Name function::local_clock_us - Number of microseconds on the local cpu's clock Synopsis Arguments None Description This function returns the number of microseconds on the local cpu's clock. This is always monotonic comparing on the same cpu, but may have some drift between cpus (within about a jiffy).
[ "local_clock_us:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-local-clock-us
Chapter 3. Configuring the Date and Time
Chapter 3. Configuring the Date and Time Modern operating systems distinguish between the following two types of clocks: A real-time clock ( RTC ), commonly referred to as a hardware clock , (typically an integrated circuit on the system board) that is completely independent of the current state of the operating system and runs even when the computer is shut down. A system clock , also known as a software clock , that is maintained by the kernel and its initial value is based on the real-time clock. Once the system is booted and the system clock is initialized, the system clock is completely independent of the real-time clock. The system time is always kept in Coordinated Universal Time ( UTC ) and converted in applications to local time as needed. Local time is the actual time in your current time zone, taking into account daylight saving time ( DST ). The real-time clock can use either UTC or local time. UTC is recommended. Red Hat Enterprise Linux 7 offers three command line tools that can be used to configure and display information about the system date and time: The timedatectl utility, which is new in Red Hat Enterprise Linux 7 and is part of systemd . The traditional date command. The hwclock utility for accessing the hardware clock. 3.1. Using the timedatectl Command The timedatectl utility is distributed as part of the systemd system and service manager and allows you to review and change the configuration of the system clock. You can use this tool to change the current date and time, set the time zone, or enable automatic synchronization of the system clock with a remote server. For information on how to display the current date and time in a custom format, see also Section 3.2, "Using the date Command" . 3.1.1. Displaying the Current Date and Time To display the current date and time along with detailed information about the configuration of the system and hardware clock, run the timedatectl command with no additional command line options: This displays the local and universal time, the currently used time zone, the status of the Network Time Protocol ( NTP ) configuration, and additional information related to DST. Example 3.1. Displaying the Current Date and Time The following is an example output of the timedatectl command on a system that does not use NTP to synchronize the system clock with a remote server: Important Changes to the status of chrony or ntpd will not be immediately noticed by timedatectl . If changes to the configuration or status of these tools are made, enter the following command: 3.1.2. Changing the Current Time To change the current time, type the following at a shell prompt as root : Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. This command updates both the system time and the hardware clock. The result is similar to using both the date --set and hwclock --systohc commands. The command will fail if an NTP service is enabled. See Section 3.1.5, "Synchronizing the System Clock with a Remote Server" to temporarily disable the service. Example 3.2. Changing the Current Time To change the current time to 11:26 p.m., run the following command as root : By default, the system is configured to use UTC. To configure your system to maintain the clock in the local time, run the timedatectl command with the set-local-rtc option as root : To configure your system to maintain the clock in the local time, replace boolean with yes (or, alternatively, y , true , t , or 1 ).
To configure the system to use UTC, replace boolean with no (or, alternatively, n , false , f , or 0 ). The default option is no . 3.1.3. Changing the Current Date To change the current date, type the following at a shell prompt as root : Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. Note that changing the date without specifying the current time results in setting the time to 00:00:00. Example 3.3. Changing the Current Date To change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root : 3.1.4. Changing the Time Zone To list all available time zones, type the following at a shell prompt: To change the currently used time zone, type as root : Replace time_zone with any of the values listed by the timedatectl list-timezones command. Example 3.4. Changing the Time Zone To identify which time zone is closest to your present location, use the timedatectl command with the list-timezones command line option. For example, to list all available time zones in Europe, type: To change the time zone to Europe/Prague , type as root : 3.1.5. Synchronizing the System Clock with a Remote Server As opposed to the manual adjustments described in the sections, the timedatectl command also allows you to enable automatic synchronization of your system clock with a group of remote servers using the NTP protocol. Enabling NTP enables the chronyd or ntpd service, depending on which of them is installed. The NTP service can be enabled and disabled using a command as follows: To enable your system to synchronize the system clock with a remote NTP server, replace boolean with yes (the default option). To disable this feature, replace boolean with no . Example 3.5. Synchronizing the System Clock with a Remote Server To enable automatic synchronization of the system clock with a remote server, type: The command will fail if an NTP service is not installed. See Section 18.3.1, "Installing chrony" for more information. 3.2. Using the date Command The date utility is available on all Linux systems and allows you to display and configure the current date and time. It is frequently used in scripts to display detailed information about the system clock in a custom format. For information on how to change the time zone or enable automatic synchronization of the system clock with a remote server, see Section 3.1, "Using the timedatectl Command" . 3.2.1. Displaying the Current Date and Time To display the current date and time, run the date command with no additional command line options: This displays the day of the week followed by the current date, local time, abbreviated time zone, and year. By default, the date command displays the local time. To display the time in UTC, run the command with the --utc or -u command line option: You can also customize the format of the displayed information by providing the +" format " option on the command line: Replace format with one or more supported control sequences as illustrated in Example 3.6, "Displaying the Current Date and Time" . See Table 3.1, "Commonly Used Control Sequences" for a list of the most frequently used formatting options, or the date (1) manual page for a complete list of these options. Table 3.1. Commonly Used Control Sequences Control Sequence Description %H The hour in the HH format (for example, 17 ). %M The minute in the MM format (for example, 30 ). %S The second in the SS format (for example, 24 ). 
%d The day of the month in the DD format (for example, 16 ). %m The month in the MM format (for example, 09 ). %Y The year in the YYYY format (for example, 2016 ). %Z The time zone abbreviation (for example, CEST ). %F The full date in the YYYY-MM-DD format (for example, 2016-09-16 ). This option is equal to %Y-%m-%d . %T The full time in the HH:MM:SS format (for example, 17:30:24). This option is equal to %H:%M:%S Example 3.6. Displaying the Current Date and Time To display the current date and local time, type the following at a shell prompt: To display the current date and time in UTC, type the following at a shell prompt: To customize the output of the date command, type: 3.2.2. Changing the Current Time To change the current time, run the date command with the --set or -s option as root : Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. By default, the date command sets the system clock to the local time. To set the system clock in UTC, run the command with the --utc or -u command line option: Example 3.7. Changing the Current Time To change the current time to 11:26 p.m., run the following command as root : 3.2.3. Changing the Current Date To change the current date, run the date command with the --set or -s option as root : Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. Note that changing the date without specifying the current time results in setting the time to 00:00:00. Example 3.8. Changing the Current Date To change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root : 3.3. Using the hwclock Command hwclock is a utility for accessing the hardware clock, also referred to as the Real Time Clock (RTC). The hardware clock is independent of the operating system you use and works even when the machine is shut down. This utility is used for displaying the time from the hardware clock. hwclock also contains facilities for compensating for systematic drift in the hardware clock. The hardware clock stores the values of: year, month, day, hour, minute, and second. It is not able to store the time standard, local time or Coordinated Universal Time (UTC), nor set the Daylight Saving Time (DST). The hwclock utility saves its settings in the /etc/adjtime file, which is created with the first change you make, for example, when you set the time manually or synchronize the hardware clock with the system time. Note For the changes in the hwclock behaviour between Red Hat Enterprise Linux version 6 and 7, see Red Hat Enterprise Linux 7 Migration Planning Guide guide. 3.3.1. Displaying the Current Date and Time Running hwclock with no command line options as the root user returns the date and time in local time to standard output. Note that using the --utc or --localtime options with the hwclock command does not mean you are displaying the hardware clock time in UTC or local time. These options are used for setting the hardware clock to keep time in either of them. The time is always displayed in local time. Additionally, using the hwclock --utc or hwclock --local commands does not change the record in the /etc/adjtime file. This command can be useful when you know that the setting saved in /etc/adjtime is incorrect but you do not want to change the setting. On the other hand, you may receive misleading information if you use the command an incorrect way. See the hwclock (8) manual page for more details. Example 3.9. 
Displaying the Current Date and Time To display the current date and the current local time from the hardware clock, run as root : CEST is a time zone abbreviation and stands for Central European Summer Time. For information on how to change the time zone, see Section 3.1.4, "Changing the Time Zone" . 3.3.2. Setting the Date and Time Besides displaying the date and time, you can manually set the hardware clock to a specific time. When you need to change the hardware clock date and time, you can do so by appending the --set and --date options along with your specification: Replace dd with a day (a two-digit number), mmm with a month (a three-letter abbreviation), yyyy with a year (a four-digit number), HH with an hour (a two-digit number), MM with a minute (a two-digit number). At the same time, you can also set the hardware clock to keep the time in either UTC or local time by adding the --utc or --localtime options, respectively. In this case, UTC or LOCAL is recorded in the /etc/adjtime file. Example 3.10. Setting the Hardware Clock to a Specific Date and Time If you want to set the date and time to a specific value, for example, to "21:17, October 21, 2016", and keep the hardware clock in UTC, run the command as root in the following format: 3.3.3. Synchronizing the Date and Time You can synchronize the hardware clock and the current system time in both directions. Either you can set the hardware clock to the current system time by using this command: Note that if you use NTP, the hardware clock is automatically synchronized to the system clock every 11 minutes, and this command is useful only at boot time to get a reasonable initial system time. Or, you can set the system time from the hardware clock by using the following command: When you synchronize the hardware clock and the system time, you can also specify whether you want to keep the hardware clock in local time or UTC by adding the --utc or --localtime option. Similarly to using --set , UTC or LOCAL is recorded in the /etc/adjtime file. The hwclock --systohc --utc command is functionally similar to timedatectl set-local-rtc false and the hwclock --systohc --local command is an alternative to timedatectl set-local-rtc true . Example 3.11. Synchronizing the Hardware Clock with System Time To set the hardware clock to the current system time and keep the hardware clock in local time, run the following command as root : To avoid problems with time zone and DST switching, it is recommended to keep the hardware clock in UTC. The shown Example 3.11, "Synchronizing the Hardware Clock with System Time" is useful, for example, in case of a multi boot with a Windows system, which assumes the hardware clock runs in local time by default, and all other systems need to accommodate to it by using local time as well. It may also be needed with a virtual machine; if the virtual hardware clock provided by the host is running in local time, the guest system needs to be configured to use local time, too. 3.4. Additional Resources For more information on how to configure the date and time in Red Hat Enterprise Linux 7, see the resources listed below. Installed Documentation timedatectl (1) - The manual page for the timedatectl command line utility documents how to use this tool to query and change the system clock and its settings. date (1) - The manual page for the date command provides a complete list of supported command line options. 
hwclock (8) - The manual page for the hwclock command provides a complete list of supported command line options. See Also Chapter 2, System Locale and Keyboard Configuration documents how to configure the keyboard layout. Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the systemctl command to manage system services.
[ "timedatectl", "~]USD timedatectl Local time: Mon 2016-09-16 19:30:24 CEST Universal time: Mon 2016-09-16 17:30:24 UTC Timezone: Europe/Prague (CEST, +0200) NTP enabled: no NTP synchronized: no RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2016-03-31 01:59:59 CET Sun 2016-03-31 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2016-10-27 02:59:59 CEST Sun 2016-10-27 02:00:00 CET", "~]# systemctl restart systemd-timedated.service", "timedatectl set-time HH:MM:SS", "~]# timedatectl set-time 23:26:00", "timedatectl set-local-rtc boolean", "timedatectl set-time YYYY-MM-DD", "~]# timedatectl set-time \"2017-06-02 23:26:00\"", "timedatectl list-timezones", "timedatectl set-timezone time_zone", "~]# timedatectl list-timezones | grep Europe Europe/Amsterdam Europe/Andorra Europe/Athens Europe/Belgrade Europe/Berlin Europe/Bratislava ...", "~]# timedatectl set-timezone Europe/Prague", "timedatectl set-ntp boolean", "~]# timedatectl set-ntp yes", "date", "date --utc", "date +\"format\"", "~]USD date Mon Sep 16 17:30:24 CEST 2016", "~]USD date --utc Mon Sep 16 15:30:34 UTC 2016", "~]USD date +\"%Y-%m-%d %H:%M\" 2016-09-16 17:30", "date --set HH:MM:SS", "date --set HH:MM:SS --utc", "~]# date --set 23:26:00", "date --set YYYY-MM-DD", "~]# date --set \"2017-06-02 23:26:00\"", "hwclock", "~]# hwclock Tue 15 Apr 2017 04:23:46 PM CEST -0.329272 seconds", "hwclock --set --date \"dd mmm yyyy HH:MM\"", "~]# hwclock --set --date \"21 Oct 2016 21:17\" --utc", "hwclock --systohc", "hwclock --hctosys", "~]# hwclock --systohc --localtime" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/chap-Configuring_the_Date_and_Time
probe::signal.pending.return
probe::signal.pending.return Name probe::signal.pending.return - Examination of pending signal completed Synopsis Values retstr Return value as a string name Name of the probe point
[ "signal.pending.return" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-pending-return
Chapter 8. Troubleshooting
Chapter 8. Troubleshooting The following chapter describes what happens when SELinux denies access; the top three causes of problems; where to find information about correct labeling; analyzing SELinux denials; and creating custom policy modules with audit2allow . 8.1. What Happens when Access is Denied SELinux decisions, such as allowing or disallowing access, are cached. This cache is known as the Access Vector Cache (AVC). Denial messages are logged when SELinux denies access. These denials are also known as "AVC denials", and are logged to a different location, depending on which daemons are running: Daemon Log Location auditd on /var/log/audit/audit.log auditd off; rsyslogd on /var/log/messages setroubleshootd, rsyslogd, and auditd on /var/log/audit/audit.log . Easier-to-read denial messages also sent to /var/log/messages If you are running the X Window System, have the setroubleshoot and setroubleshoot-server packages installed, and the setroubleshootd and auditd daemons are running, a warning is displayed when access is denied by SELinux: Clicking on 'Show' presents a detailed analysis of why SELinux denied access, and a possible solution for allowing access. If you are not running the X Window System, it is less obvious when access is denied by SELinux. For example, users browsing your website may receive an error similar to the following: For these situations, if DAC rules (standard Linux permissions) allow access, check /var/log/messages and /var/log/audit/audit.log for "SELinux is preventing" and "denied" errors respectively. This can be done by running the following commands as the Linux root user:
[ "Forbidden You don't have permission to access file name on this server", "~]# grep \"SELinux is preventing\" /var/log/messages", "~]# grep \"denied\" /var/log/audit/audit.log" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-security-enhanced_linux-troubleshooting
Chapter 92. Pragmatic AI
Chapter 92. Pragmatic AI When you think about artificial intelligence (AI), machine learning and big data might come to mind. But machine learning is only part of the picture. Artificial intelligence includes the following technologies: Robotics: The integration of technology, science, and engineering that produces machines that can perform physical tasks that are performed by humans Machine learning: The ability of a collection of algorithms to learn or improve when exposed to data without being explicitly programmed to do so Natural language processing: A subset of machine learning that processes human speech Mathematical optimization: The use of conditions and constraints to find the optimal solution to a problem Digital decisioning: The use of defined criteria, conditions, and a series of machine and human tasks to make decisions While science fiction is filled with what is referred to as artificial general intelligence (AGI), machines that perform better than people and cannot be distinguished from them and learn and evolve without human intervention or control, AGI is decades away. Meanwhile, we have pragmatic AI which is much less frightening and much more useful to us today. Pragmatic AI is a collection of AI technologies that, when combined, provide solutions to problems such as predicting customer behavior, providing automated customer service, and helping customers make purchasing decisions. Leading industry analysts report that previously organizations have struggled with AI technologies because they invested in the potential of AI rather than the reality of what AI can deliver today. AI projects were not productive and as a result investment in AI slowed and budgets for AI projects were reduced. This disillusionment with AI is often referred to as an AI winter. AI has experienced several cycles of AI winters followed by AI springs and we are now decidedly in an AI spring. Organizations are seeing the practical reality of what AI can deliver. Being pragmatic means being practical and realistic. A pragmatic approach to AI considers AI technologies that are available today, combines them where useful, and adds human intervention when needed to create solutions to real-world problems. Pragmatic AI solution example One application of pragmatic AI is in customer support. A customer files a support ticket that reports a problem, for example, a login error. A machine learning algorithm is applied to the ticket to match the ticket content with existing solutions, based on keywords or natural language processing (NLP). The keywords might appear in many solutions, some relevant and some not as relevant. You can use digital decisioning to determine which solutions to present to the customer. However, sometimes none of the solutions proposed by the algorithm are appropriate to propose to the customer. This can be because all solutions have a low confidence score or multiple solutions have a high confidence score. In cases where an appropriate solution cannot be found, the digital decisioning can involve the human support team. To find the best support person based on availability and expertise, mathematical optimization selects the best assignee for the support ticket by considering employee rostering constraints. As this example shows, you can combine machine learning to extract information from data analysis and digital decisioning to model human knowledge and experience. You can then apply mathematical optimization to schedule human assistance. 
This is a pattern that you can apply to other situations, for example, a credit card dispute and credit card fraud detection. These technologies use four industry standards: Case Management Model and Notation (CMMN) CMMN is used to model work methods that include various activities that might be performed in an unpredictable order depending on circumstances. CMMN models are event centered. CMMN overcomes limitations of what can be modeled with BPMN2 by supporting less structured work tasks and tasks driven by humans. By combining BPMN and CMMN you can create much more powerful models. Business Process Model and Notation (BPMN2) The BPMN2 specification is an Object Management Group (OMG) specification that defines standards for graphically representing a business process, defines execution semantics for the elements, and provides process definitions in XML format. BPMN2 can model computer and human tasks. Decision Model and Notation (DMN) Decision Model and Notation (DMN) is a standard established by the OMG for describing and modeling operational decisions. DMN defines an XML schema that enables DMN models to be shared between DMN-compliant platforms and across organizations so that business analysts and business rules developers can collaborate in designing and implementing DMN decision services. The DMN standard is similar to and can be used together with the Business Process Model and Notation (BPMN) standard for designing and modeling business processes. Predictive Model Markup Language (PMML) PMML is the language used to represent predictive models, mathematical models that use statistical techniques to uncover, or learn, patterns in large volumes of data. Predictive models use the patterns that they learn to predict the existence of patterns in new data. With PMML, you can share predictive models between applications. This data is exported as a PMML file that can be consumed by a DMN model. As a machine learning framework continues to train the model, the updated data can be saved to the existing PMML file. This means that you can use predictive models created by any application that can save the model as a PMML file. Therefore, DMN and PMML integrate well. Putting it all together This illustration shows how predictive decision automation works. Business data enters the system, for example, data from a loan application. A decision model that is integrated with a predictive model decides whether or not to approve the loan or whether additional tasks are required. A business action results, for example, a rejection letter or loan offer is sent to the customer. The section demonstrates how predictive decision automation works with Red Hat Process Automation Manager.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/ai-con_artificial-intelligence
Chapter 2. Understanding API compatibility guidelines
Chapter 2. Understanding API compatibility guidelines Important This guidance does not cover layered OpenShift Container Platform offerings. 2.1. API compatibility guidelines Red Hat recommends that application developers adopt the following principles in order to improve compatibility with OpenShift Container Platform: Use APIs and components with support tiers that match the application's need. Build applications using the published client libraries where possible. Applications are only guaranteed to run correctly if they execute in an environment that is as new as the environment it was built to execute against. An application that was built for OpenShift Container Platform 4.14 is not guaranteed to function properly on OpenShift Container Platform 4.13. Do not design applications that rely on configuration files provided by system packages or other components. These files can change between versions unless the upstream community is explicitly committed to preserving them. Where appropriate, depend on any Red Hat provided interface abstraction over those configuration files in order to maintain forward compatibility. Direct file system modification of configuration files is discouraged, and users are strongly encouraged to integrate with an Operator provided API where available to avoid dual-writer conflicts. Do not depend on API fields prefixed with unsupported<FieldName> or annotations that are not explicitly mentioned in product documentation. Do not depend on components with shorter compatibility guarantees than your application. Do not perform direct storage operations on the etcd server. All etcd access must be performed via the api-server or through documented backup and restore procedures. Red Hat recommends that application developers follow the compatibility guidelines defined by Red Hat Enterprise Linux (RHEL). OpenShift Container Platform strongly recommends the following guidelines when building an application or hosting an application on the platform: Do not depend on a specific Linux kernel or OpenShift Container Platform version. Avoid reading from proc , sys , and debug file systems, or any other pseudo file system. Avoid using ioctls to directly interact with hardware. Avoid direct interaction with cgroups in order to not conflict with OpenShift Container Platform host-agents that provide the container execution environment. Note During the lifecycle of a release, Red Hat makes commercially reasonable efforts to maintain API and application operating environment (AOE) compatibility across all minor releases and z-stream releases. If necessary, Red Hat might make exceptions to this compatibility goal for critical impact security or other significant issues. 2.2. API compatibility exceptions The following are exceptions to compatibility in OpenShift Container Platform: RHEL CoreOS file system modifications not made with a supported Operator No assurances are made at this time that a modification made to the host operating file system is preserved across minor releases except for where that modification is made through the public interface exposed via a supported Operator, such as the Machine Config Operator or Node Tuning Operator. Modifications to cluster infrastructure in cloud or virtualized environments No assurances are made at this time that a modification to the cloud hosting environment that supports the cluster is preserved except for where that modification is made through a public interface exposed in the product or is documented as a supported configuration. 
Cluster infrastructure providers are responsible for preserving their cloud or virtualized infrastructure except for where they delegate that authority to the product through an API. Functional defaults between an upgraded cluster and a new installation No assurances are made at this time that a new installation of a product minor release will have the same functional defaults as a version of the product that was installed with a prior minor release and upgraded to the equivalent version. For example, future versions of the product may provision cloud infrastructure with different defaults than prior minor versions. In addition, different default security choices may be made in future versions of the product than those made in past versions of the product. Past versions of the product will forward upgrade, but preserve legacy choices where appropriate specifically to maintain backwards compatibility. Usage of API fields that have the prefix "unsupported" or undocumented annotations Select APIs in the product expose fields with the prefix unsupported<FieldName> . No assurances are made at this time that usage of this field is supported across releases or within a release. Product support can request a customer to specify a value in this field when debugging specific problems, but its usage is not supported outside of that interaction. Usage of annotations on objects that are not explicitly documented are not assured support across minor releases. API availability per product installation topology The OpenShift distribution will continue to evolve its supported installation topology, and not all APIs in one install topology will necessarily be included in another. For example, certain topologies may restrict read/write access to particular APIs if they are in conflict with the product installation topology or not include a particular API at all if not pertinent to that topology. APIs that exist in a given topology will be supported in accordance with the compatibility tiers defined above. 2.3. API compatibility common terminology 2.3.1. Application Programming Interface (API) An API is a public interface implemented by a software program that enables it to interact with other software. In OpenShift Container Platform, the API is served from a centralized API server and is used as the hub for all system interaction. 2.3.2. Application Operating Environment (AOE) An AOE is the integrated environment that executes the end-user application program. The AOE is a containerized environment that provides isolation from the host operating system (OS). At a minimum, AOE allows the application to run in an isolated manner from the host OS libraries and binaries, but still share the same OS kernel as all other containers on the host. The AOE is enforced at runtime and it describes the interface between an application and its operating environment. It includes intersection points between the platform, operating system and environment, with the user application including projection of downward API, DNS, resource accounting, device access, platform workload identity, isolation among containers, isolation between containers and host OS. The AOE does not include components that might vary by installation, such as Container Network Interface (CNI) plugin selection or extensions to the product such as admission hooks. Components that integrate with the cluster at a level below the container environment might be subjected to additional variation between versions. 2.3.3. 
Compatibility in a virtualized environment Virtual environments emulate bare-metal environments such that unprivileged applications that run on bare-metal environments will run, unmodified, in corresponding virtual environments. Virtual environments present simplified abstracted views of physical resources, so some differences might exist. 2.3.4. Compatibility in a cloud environment OpenShift Container Platform might choose to offer integration points with a hosting cloud environment via cloud provider specific integrations. The compatibility of these integration points is specific to the guarantee provided by the native cloud vendor and its intersection with the OpenShift Container Platform compatibility window. Where OpenShift Container Platform provides an integration with a cloud environment natively as part of the default installation, Red Hat develops against stable cloud API endpoints to provide commercially reasonable support with forward looking compatibility that includes stable deprecation policies. Example areas of integration between the cloud provider and OpenShift Container Platform include, but are not limited to, dynamic volume provisioning, service load balancer integration, pod workload identity, dynamic management of compute, and infrastructure provisioned as part of initial installation. 2.3.5. Major, minor, and z-stream releases A Red Hat major release represents a significant step in the development of a product. Minor releases appear more frequently within the scope of a major release and represent deprecation boundaries that might impact future application compatibility. A z-stream release is an update to a minor release which provides a stream of continuous fixes to an associated minor release. API and AOE compatibility is never broken in a z-stream release except when this policy is explicitly overridden in order to respond to an unforeseen security impact. For example, in the release 4.13.2: 4 is the major release version 13 is the minor release version 2 is the z-stream release version 2.3.6. Extended user support (EUS) A minor release in an OpenShift Container Platform major release that has an extended support window for critical bug fixes. Users are able to migrate between EUS releases by incrementally adopting minor versions between EUS releases. It is important to note that the deprecation policy is defined across minor releases and not EUS releases. As a result, an EUS user might have to respond to a deprecation when migrating to a future EUS while sequentially upgrading through each minor release. 2.3.7. Developer Preview An optional product capability that is not officially supported by Red Hat, but is intended to provide a mechanism to explore early phase technology. By default, Developer Preview functionality is opt-in, and subject to removal at any time. Enabling a Developer Preview feature might render a cluster unsupportable dependent upon the scope of the feature. If you are a Red Hat customer or partner and have feedback about these developer preview versions, file an issue by using the OpenShift Bugs tracker . Do not use the formal Red Hat support service ticket process. You can read more about support handling in the following knowledge article . 2.3.8. Technology Preview An optional product capability that provides early access to upcoming product innovations to test functionality and provide feedback during the development process. 
The feature is not fully supported, might not be functionally complete, and is not intended for production use. Usage of a Technology Preview function requires explicit opt-in. Learn more about the Technology Preview Features Support Scope .
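One practical way to act on the guideline about matching support tiers is to audit which API groups and versions your workloads actually use. The commands below are a hedged sketch using the standard oc client rather than part of the compatibility policy itself; the APIRequestCount query assumes a cluster version that exposes that resource.

# List the API groups and versions served by this cluster (useful when pinning client libraries).
oc api-resources

# Where available, show which API versions, including deprecated ones, have been requested recently.
oc get apirequestcounts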
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/api_overview/compatibility-guidelines
Chapter 1. Content and patch management with Red Hat Satellite
Chapter 1. Content and patch management with Red Hat Satellite With Red Hat Satellite, you can provide content and apply patches to hosts systematically in all lifecycle stages. 1.1. Content flow in Red Hat Satellite Content flow in Red Hat Satellite involves management and distribution of content from external sources to hosts. Content in Satellite flows from external content sources to Satellite Server . Capsule Servers mirror the content from Satellite Server to hosts . External content sources You can configure many content sources with Satellite. The supported content sources include the Red Hat Customer Portal, Git repositories, Ansible collections, Docker Hub, SCAP repositories, or internal data stores of your organization. Satellite Server On your Satellite Server, you plan and manage the content lifecycle. Capsule Servers By creating Capsule Servers, you can establish content sources in various locations based on your needs. For example, you can establish a content source for each geographical location or multiple content sources for a data center with separate networks. Hosts By assigning a host system to a Capsule Server or directly to your Satellite Server, you ensure the host receives the content they provide. Hosts can be physical or virtual. Additional resources See Chapter 4, Major Satellite components for details. See Managing Red Hat subscriptions in Managing content for information about Content Delivery Network (CDN). 1.2. Content views in Red Hat Satellite A content view is a deliberately curated subset of content that your hosts can access. By creating a content view, you can define the software versions used by a particular environment or Capsule Server. Each content view creates a set of repositories across each environment. Your Satellite Server stores and manages these repositories. For example, you can create content views in the following ways: A content view with older package versions for a production environment and another content view with newer package versions for a Development environment. A content view with a package repository required by an operating system and another content view with a package repository required by an application. A composite content view for a modular approach to managing content views. For example, you can use one content view for content for managing an operating system and another content view for content for managing an application. By creating a composite content view that combines both content views, you create a new repository that merges the repositories from each of the content views. However, the repositories for the content views still exist and you can keep managing them separately as well. Default Organization View A Default Organization View is an application-controlled content view for all content that is synchronized to Satellite. You can register a host to the Library environment on Satellite to consume the Default Organization View without configuring content views and lifecycle environments. Promoting a content view across environments When you promote a content view from one environment to the environment in the application lifecycle, Satellite updates the repository and publishes the packages. Example 1.1. 
Promoting a package from Development to Testing The repositories for Testing and Production contain the my-software -1.0-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 1 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm my-software -1.0-0.noarch.rpm If you promote Version 2 of the content view from Development to Testing , the repository for Testing updates to contain the my-software -1.1-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 2 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm This ensures hosts are designated to a specific environment but receive updates when that environment uses a new version of the content view. Additional resources For more information, see Managing content views in Managing content . 1.3. Content types in Red Hat Satellite With Red Hat Satellite, you can import and manage many content types. For example, Satellite supports the following content types: RPM packages Import RPM packages from repositories related to your Red Hat subscriptions. Satellite Server downloads the RPM packages from the Red Hat Content Delivery Network and stores them locally. You can use these repositories and their RPM packages in content views. Kickstart trees Import the Kickstart trees to provision a host. New systems access these Kickstart trees over a network to use as base content for their installation. Red Hat Satellite contains predefined Kickstart templates. You can also create your own Kickstart templates. ISO and KVM images Download and manage media for installation and provisioning. For example, Satellite downloads, stores, and manages ISO images and guest images for specific Red Hat Enterprise Linux and non-Red Hat operating systems. Custom file type Manage custom content for any type of file you require, such as SSL certificates, ISO images, and OVAL files. 1.4. Additional resources For information about how to manage content with Satellite, see Managing content .
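As a hedged illustration of the promotion flow described above, the following hammer commands publish a new content view version and then promote it into a Testing lifecycle environment. The content view, organization, environment, and version values are placeholders, and option names can vary between Satellite releases.

# Publish a new version of the content view (placeholder names throughout).
hammer content-view publish --name "My_Content_View" --organization "My_Organization"

# Promote the published version to the Testing environment.
hammer content-view version promote --content-view "My_Content_View" --version 2 \
  --to-lifecycle-environment "Testing" --organization "My_Organization"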
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/Content-and-Patch-Management-with-Satellite_planning
Chapter 3. Administering GFS2 file systems
Chapter 3. Administering GFS2 file systems There are a variety of commands and options that you use to create, mount, grow, and manage GFS2 file systems. 3.1. GFS2 file system creation You create a GFS2 file system with the mkfs.gfs2 command. A file system is created on an activated LVM volume. 3.1.1. The GFS2 mkfs command The following information is required to run the mkfs.gfs2 command to create a clustered GFS2 file system: Lock protocol/module name, which is lock_dlm for a cluster Cluster name Number of journals (one journal required for each node that may be mounting the file system) Note Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. You can, however, increase the size of an existing file system with the gfs2_grow command. The format for creating a clustered GFS2 file system is as follows. Note that Red Hat does not support the use of GFS2 as a single-node file system. If you prefer, you can create a GFS2 file system by using the mkfs command with the -t parameter specifying a file system of type gfs2 , followed by the GFS2 file system options. Warning Improperly specifying the ClusterName:FSName parameter may cause file system or lock space corruption. ClusterName The name of the cluster for which the GFS2 file system is being created. FSName The file system name, which can be 1 to 16 characters long. The name must be unique for all lock_dlm file systems over the cluster. NumberJournals Specifies the number of journals to be created by the mkfs.gfs2 command. One journal is required for each node that mounts the file system. For GFS2 file systems, more journals can be added later without growing the file system. BlockDevice Specifies a logical or other block device The following table describes the mkfs.gfs2 command options (flags and parameters). Table 3.1. Command Options: mkfs.gfs2 Flag Parameter Description -c Megabytes Sets the initial size of each journal's quota change file to Megabytes . -D Enables debugging output. -h Help. Displays available options. -J Megabytes Specifies the size of the journal in megabytes. Default journal size is 128 megabytes. The minimum size is 8 megabytes. Larger journals improve performance, although they use more memory than smaller journals. -j Number Specifies the number of journals to be created by the mkfs.gfs2 command. One journal is required for each node that mounts the file system. If this option is not specified, one journal will be created. For GFS2 file systems, you can add additional journals at a later time without growing the file system. -O Prevents the mkfs.gfs2 command from asking for confirmation before writing the file system. -p LockProtoName * Specifies the name of the locking protocol to use. Recognized locking protocols include: * lock_dlm - The standard locking module, required for a clustered file system. * lock_nolock - Used when GFS2 is acting as a local file system (one node only). Red Hat does not support the use of GFS2 as a single-node file system in a production environment. lock_nolock should be used only for the purposes of backup or for a secondary-site Disaster Recovery node, as described in Minimum cluster size . When using lock_nolock , you must ensure that the GFS2 file system is being used by only one system at a time. -q Quiet. Do not display anything. -r Megabytes Specifies the size of the resource groups in megabytes. The minimum resource group size is 32 megabytes. The maximum resource group size is 2048 megabytes. 
A large resource group size may increase performance on very large file systems. If this is not specified, mkfs.gfs2 chooses the resource group size based on the size of the file system: average size file systems will have 256 megabyte resource groups, and bigger file systems will have bigger resource groups for better performance. -t LockTableName * A unique identifier that specifies the lock table field when you use the lock_dlm protocol; the lock_nolock protocol does not use this parameter. * This parameter has two parts separated by a colon (no spaces) as follows: ClusterName:FSName . * ClusterName is the name of the cluster for which the GFS2 file system is being created; only members of this cluster are permitted to use this file system. * FSName , the file system name, can be 1 to 16 characters in length, and the name must be unique among all file systems in the cluster. -V Displays command version information. 3.1.2. Creating a GFS2 file system The following example creates two GFS2 file systems. For both of these file systems, lock_dlm is the locking protocol that the file system uses, since this is a clustered file system. Both file systems can be used in the cluster named alpha . For the first file system, the file system name is mydata1 . It contains eight journals and is created on /dev/vg01/lvol0 . For the second file system, the file system name is mydata2 . It contains eight journals and is created on /dev/vg01/lvol1 . 3.2. Mounting a GFS2 file system Before you can mount a GFS2 file system, the file system must exist, the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started. After those requirements have been met, you can mount the GFS2 file system as you would any Linux file system. Note You should always use Pacemaker to manage the GFS2 file system in a production environment rather than manually mounting the file system with a mount command, as this may cause issues at system shutdown. To manipulate file ACLs, you must mount the file system with the -o acl mount option. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl ), but are not allowed to set them (with setfacl ). 3.2.1. Mounting a GFS2 file system with no options specified In this example, the GFS2 file system on /dev/vg01/lvol0 is mounted on the /mygfs2 directory. 3.2.2. Mounting a GFS2 file system that specifies mount options The following is the format for the command to mount a GFS2 file system that specifies mount options. BlockDevice Specifies the block device where the GFS2 file system resides. MountPoint Specifies the directory where the GFS2 file system should be mounted. The -o option argument consists of GFS2-specific options or acceptable standard Linux mount -o options, or a combination of both. Multiple option parameters are separated by a comma and no spaces. Note The mount command is a Linux system command. In addition to using these GFS2-specific options, you can use other, standard, mount command options (for example, -r ). For information about other Linux mount command options, see the Linux mount man page. The following table describes the available GFS2-specific -o option values that can be passed to GFS2 at mount time. Note This table includes descriptions of options that are used with local file systems only. Note, however, that Red Hat does not support the use of GFS2 as a single-node file system. 
Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes). Table 3.2. GFS2-Specific Mount Options Option Description acl Allows manipulating file ACLs. If a file system is mounted without the acl mount option, users are allowed to view ACLs (with getfacl ), but are not allowed to set them (with setfacl ). data=[ordered|writeback] When data=ordered is set, the user data modified by a transaction is flushed to the disk before the transaction is committed to disk. This should prevent the user from seeing uninitialized blocks in a file after a crash. When data=writeback mode is set, the user data is written to the disk at any time after it is dirtied; this does not provide the same consistency guarantee as ordered mode, but it should be slightly faster for some workloads. The default value is ordered mode. ignore_local_fs Caution: This option should not be used when GFS2 file systems are shared. Forces GFS2 to treat the file system as a multi-host file system. By default, using lock_nolock automatically turns on the localflocks flag. localflocks Caution: This option should not be used when GFS2 file systems are shared. Tells GFS2 to let the VFS (virtual file system) layer do all flock and fcntl. The localflocks flag is automatically turned on by lock_nolock . lockproto= LockModuleName Allows the user to specify which locking protocol to use with the file system. If LockModuleName is not specified, the locking protocol name is read from the file system superblock. locktable= LockTableName Allows the user to specify which locking table to use with the file system. quota=[off/account/on] Turns quotas on or off for a file system. Setting the quotas to be in the account state causes the per UID/GID usage statistics to be correctly maintained by the file system; limit and warn values are ignored. The default value is off . errors=panic|withdraw When errors=panic is specified, file system errors will cause a kernel panic. When errors=withdraw is specified, which is the default behavior, file system errors will cause the system to withdraw from the file system and make it inaccessible until the reboot; in some cases the system may remain running. discard/nodiscard Causes GFS2 to generate "discard" I/O requests for blocks that have been freed. These can be used by suitable hardware to implement thin provisioning and similar schemes. barrier/nobarrier Causes GFS2 to send I/O barriers when flushing the journal. The default value is on . This option is automatically turned off if the underlying device does not support I/O barriers. Use of I/O barriers with GFS2 is highly recommended at all times unless the block device is designed so that it cannot lose its write cache content (for example, if it is on a UPS or it does not have a write cache). quota_quantum= secs Sets the number of seconds for which a change in the quota information may sit on one node before being written to the quota file. This is the preferred way to set this parameter. The value is an integer number of seconds greater than zero. The default is 60 seconds. Shorter settings result in faster updates of the lazy quota information and less likelihood of someone exceeding their quota. Longer settings make file system operations involving quotas faster and more efficient. statfs_quantum= secs Setting statfs_quantum to 0 is the preferred way to set the slow version of statfs . 
The default value is 30 secs which sets the maximum time period before statfs changes will be synced to the master statfs file. This can be adjusted to allow for faster, less accurate statfs values or slower more accurate values. When this option is set to 0, statfs will always report the true values. statfs_percent= value Provides a bound on the maximum percentage change in the statfs information about a local basis before it is synced back to the master statfs file, even if the time period has not expired. If the setting of statfs_quantum is 0, then this setting is ignored. 3.2.3. Unmounting a GFS2 file system GFS2 file systems that have been mounted manually rather than automatically through Pacemaker will not be known to the system when file systems are unmounted at system shutdown. As a result, the GFS2 resource agent will not unmount the GFS2 file system. After the GFS2 resource agent is shut down, the standard shutdown process kills off all remaining user processes, including the cluster infrastructure, and tries to unmount the file system. This unmount will fail without the cluster infrastructure and the system will hang. To prevent the system from hanging when the GFS2 file systems are unmounted, you should do one of the following: Always use Pacemaker to manage the GFS2 file system. If a GFS2 file system has been mounted manually with the mount command, be sure to unmount the file system manually with the umount command before rebooting or shutting down the system. If your file system hangs while it is being unmounted during system shutdown under these circumstances, perform a hardware reboot. It is unlikely that any data will be lost since the file system is synced earlier in the shutdown process. The GFS2 file system can be unmounted the same way as any Linux file system, by using the umount command. Note The umount command is a Linux system command. Information about this command can be found in the Linux umount command man pages. Usage MountPoint Specifies the directory where the GFS2 file system is currently mounted. 3.3. Backing up a GFS2 file system It is important to make regular backups of your GFS2 file system in case of emergency, regardless of the size of your file system. Many system administrators feel safe because they are protected by RAID, multipath, mirroring, snapshots, and other forms of redundancy, but there is no such thing as safe enough. It can be a problem to create a backup since the process of backing up a node or set of nodes usually involves reading the entire file system in sequence. If this is done from a single node, that node will retain all the information in cache until other nodes in the cluster start requesting locks. Running this type of backup program while the cluster is in operation will negatively impact performance. Dropping the caches once the backup is complete reduces the time required by other nodes to regain ownership of their cluster locks and caches. This is still not ideal, however, because the other nodes will have stopped caching the data that they were caching before the backup process began. You can drop caches using the following command after the backup is complete: It is faster if each node in the cluster backs up its own files so that the task is split between the nodes. You might be able to accomplish this with a script that uses the rsync command on node-specific directories. 
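A minimal sketch of the per-node approach suggested above, assuming each cluster node backs up only its own subdirectory of the GFS2 mount over rsync; the directory layout and backup target are placeholders.

#!/bin/bash
# Illustrative per-node GFS2 backup: each node copies only its own directory,
# then drops its caches so other nodes can regain their locks quickly.
NODE=$(hostname -s)                           # for example, node1, node2, ...
SRC="/mygfs2/${NODE}/"                        # node-specific directory (placeholder layout)
DEST="backup.example.com:/backups/${NODE}/"   # placeholder backup target

rsync -a --delete "${SRC}" "${DEST}"

# Drop the page, dentry, and inode caches after the backup completes (run as root).
echo -n 3 > /proc/sys/vm/drop_caches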
Red Hat recommends making a GFS2 backup by creating a hardware snapshot on the SAN, presenting the snapshot to another system, and backing it up there. The backup system should mount the snapshot with -o lockproto=lock_nolock since it will not be in a cluster. Note, however, that Red Hat does not support the use of GFS2 as a single-node file system in a production environment. This option should be used only for the purposes of backup or for a secondary-site Disaster Recovery node, as described in Minimum cluster size . When using this option, you must ensure that the GFS2 file system is being used by only one system at a time. 3.4. Suspending activity on a GFS2 file system You can suspend write activity to a file system by using the dmsetup suspend command. Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. The dmsetup resume command ends the suspension. The format for the command to suspend activity on a GFS2 file system is as follows. This example suspends writes to file system /mygfs2 . The format for the command to end suspension of activity on a GFS2 file system is as follows. This example ends suspension of writes to file system /mygfs2 . 3.5. Growing a GFS2 file system The gfs2_grow command is used to expand a GFS2 file system after the device where the file system resides has been expanded. Running the gfs2_grow command on an existing GFS2 file system fills all spare space between the current end of the file system and the end of the device with a newly initialized GFS2 file system extension. All nodes in the cluster can then use the extra storage space that has been added. Note You cannot decrease the size of a GFS2 file system. The gfs2_grow command must be run on a mounted file system. The following procedure increases the size of the GFS2 file system in a cluster that is mounted on the logical volume shared_vg/shared_lv1 with a mount point of /mnt/gfs2 . Procedure Perform a backup of the data on the file system. If you do not know the logical volume that is used by the file system to be expanded, you can determine this by running the df mountpoint command. This will display the device name in the following format: /dev/mapper/ vg - lv For example, the device name /dev/mapper/shared_vg-shared_lv1 indicates that the logical volume is shared_vg/shared_lv1 . On one node of the cluster, expand the underlying cluster volume with the lvextend command. On one node of the cluster, increase the size of the GFS2 file system. Do not extend the file system if the logical volume was not refreshed on all of the nodes, otherwise the file system data may become unavailable throughout the cluster. Run the df command on all nodes to check that the new space is now available in the file system. Note that it may take up to 30 seconds for the df command on all nodes to show the same file system size. 3.6. Adding journals to a GFS2 file system GFS2 requires one journal for each node in a cluster that needs to mount the file system. If you add additional nodes to the cluster, you can add journals to a GFS2 file system with the gfs2_jadd command. You can add journals to a GFS2 file system dynamically at any point without expanding the underlying logical volume. The gfs2_jadd command must be run on a mounted file system, but it needs to be run on only one node in the cluster. All the other nodes sense that the expansion has occurred. 
Note If a GFS2 file system is full, the gfs2_jadd command will fail, even if the logical volume containing the file system has been extended and is larger than the file system. This is because in a GFS2 file system, journals are plain files rather than embedded metadata, so simply extending the underlying logical volume will not provide space for the journals. Before adding journals to a GFS2 file system, you can find out how many journals the GFS2 file system currently contains with the gfs2_edit -p jindex command, as in the following example: The format for the basic command to add journals to a GFS2 file system is as follows. Number Specifies the number of new journals to be added. MountPoint Specifies the directory where the GFS2 file system is mounted. In this example, one journal is added to the file system on the /mygfs2 directory.
[ "mkfs.gfs2 -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice", "mkfs -t gfs2 -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice", "mkfs.gfs2 -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0 mkfs.gfs2 -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1", "mount /dev/vg01/lvol0 /mygfs2", "mount BlockDevice MountPoint -o option", "umount MountPoint", "echo -n 3 > /proc/sys/vm/drop_caches", "dmsetup suspend MountPoint", "dmsetup suspend /mygfs2", "dmsetup resume MountPoint", "dmsetup resume /mygfs2", "lvextend -L+1G shared_vg/shared_lv1 Size of logical volume shared_vg/shared_lv1 changed from 5.00 GiB (1280 extents) to 6.00 GiB (1536 extents). WARNING: extending LV with a shared lock, other hosts may require LV refresh. Logical volume shared_vg/shared_lv1 successfully resized.", "gfs2_grow /mnt/gfs2 FS: Mount point: /mnt/gfs2 FS: Device: /dev/mapper/shared_vg-shared_lv1 FS: Size: 1310719 (0x13ffff) DEV: Length: 1572864 (0x180000) The file system will grow by 1024MB. gfs2_grow complete.", "df -h /mnt/gfs2 ] Filesystem Size Used Avail Use% Mounted on /dev/mapper/shared_vg-shared_lv1 6.0G 4.5G 1.6G 75% /mnt/gfs2", "gfs2_edit -p jindex /dev/sasdrives/scratch|grep journal 3/3 [fc7745eb] 4/25 (0x4/0x19): File journal0 4/4 [8b70757d] 5/32859 (0x5/0x805b): File journal1 5/5 [127924c7] 6/65701 (0x6/0x100a5): File journal2", "gfs2_jadd -j Number MountPoint", "gfs2_jadd -j 1 /mygfs2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/assembly_creating-mounting-gfs2-configuring-gfs2-file-systems
Chapter 5. Set Up Logging
Chapter 5. Set Up Logging 5.1. About Logging Red Hat JBoss Data Grid provides highly configurable logging facilities for both its own internal use and for use by deployed applications. The logging subsystem is based on JBoss LogManager and it supports several third party application logging frameworks in addition to JBoss Logging. The logging subsystem is configured using a system of log categories and log handlers. Log categories define what messages to capture, and log handlers define how to deal with those messages (write to disk, send to console, etc). After a JBoss Data Grid cache is configured with operations such as eviction and expiration, logging tracks relevant activity (including errors or failures). When set up correctly, logging provides a detailed account of what occurred in the environment and when. Logging also helps track activity that occurred just before a crash or problem in the environment. This information is useful when troubleshooting or when attempting to identify the source of a crash or error.
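As a hedged illustration of the category and handler model described above, the following management CLI sketch adds a file handler and a log category for an application package on a JBoss-based server. It assumes an EAP-style logging subsystem and the standard jboss-cli.sh script name, which may differ in your distribution; the handler name, logger category, and file path are placeholders.

# Illustrative only: define a handler, then a category that writes to it (placeholder names).
cat > /tmp/logging.cli <<'EOF'
/subsystem=logging/periodic-rotating-file-handler=APP_LOG:add(file={"relative-to" => "jboss.server.log.dir", "path" => "application.log"}, suffix=".yyyy-MM-dd", level="INFO")
/subsystem=logging/logger=com.example.myapp:add(level="DEBUG", handlers=["APP_LOG"])
EOF
jboss-cli.sh --connect --file=/tmp/logging.cli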
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-Set_Up_Logging
Chapter 6. Using PXE to Provision Hosts
Chapter 6. Using PXE to Provision Hosts You can provision bare metal instances with Satellite using one of the following methods: Unattended Provisioning New hosts are identified by a MAC address and Satellite Server provisions the host using a PXE boot process. Unattended Provisioning with Discovery New hosts use PXE boot to load the Satellite Discovery service. This service identifies hardware information about the host and lists it as an available host to provision. For more information, see Chapter 7, Configuring the Discovery Service . PXE-less Provisioning New hosts are provisioned with a boot disk image that Satellite Server generates. BIOS and UEFI Support With Red Hat Satellite, you can perform both BIOS and UEFI based PXE provisioning. Both BIOS and UEFI interfaces work as interpreters between the computer's operating system and firmware, initializing the hardware components and starting the operating system at boot time. For information about supported workflows, see Supported architectures and provisioning scenarios . In Satellite provisioning, the PXE loader option defines the DHCP filename option to use during provisioning. For BIOS systems, use the PXELinux BIOS option to enable a provisioned node to download the pxelinux.0 file over TFTP. For UEFI systems, use the PXEGrub2 UEFI option to enable a TFTP client to download grub2/grubx64.efi file, or use the PXEGrub2 UEFI HTTP option to enable an UEFI HTTP client to download grubx64.efi from Capsule with the HTTP Boot feature. For BIOS provisioning, you must associate a PXELinux template with the operating system. For UEFI provisioning, you must associate a PXEGrub2 template with the operating system. If you associate both PXELinux and PXEGrub2 templates, Satellite can deploy configuration files for both on a TFTP server, so that you can switch between PXE loaders easily. 6.1. Prerequisites for Bare Metal Provisioning The requirements for bare metal provisioning include: A Capsule Server managing the network for bare metal hosts. For unattended provisioning and discovery-based provisioning, Satellite Server requires PXE server settings. For more information about networking requirements, see Chapter 3, Configuring Networking . For more information about the Discovery service, Chapter 7, Configuring the Discovery Service . A bare metal host or a blank VM. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management guide. For information about the security token for unattended and PXE-less provisioning, see Section 6.2, "Configuring the Security Token Validity Duration" . 6.2. Configuring the Security Token Validity Duration When performing any kind of provisioning, as a security measure, Satellite automatically generates a unique token and adds this token to the kickstart URL in the PXE configuration file (PXELinux, Grub2). By default, the token is valid for 360 minutes. When you provision a host, ensure that you reboot the host within this time frame. If the token expires, it is no longer valid and you receive a 404 error and the operating system installer download fails. Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Provisioning tab. Find the Token duration option and click the edit icon and edit the duration, or enter 0 to disable token generation. 
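If you prefer the CLI over the web UI for this setting, a hammer command along the following lines should work; the setting name token_duration is an assumption based on common Foreman naming and may differ in your version.

# Illustrative: set the provisioning token validity to 120 minutes
# (the setting name token_duration is an assumption).
hammer settings set --name token_duration --value 120

# A value of 0 disables token generation.
hammer settings set --name token_duration --value 0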
If token generation is disabled, an attacker can spoof a client IP address and download the kickstart file from Satellite Server, including the encrypted root password. 6.3. Creating Hosts with Unattended Provisioning Unattended provisioning is the simplest form of host provisioning. You enter the host details on Satellite Server and boot your host. Satellite Server automatically manages the PXE configuration, organizes networking services, and provides the operating system and configuration for the host. This method of provisioning hosts requires minimal interaction during the process. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs and change the context to match your requirements. From the Host Group list, select a host group that you want to use to populate the form. Click the Interface tab, and on the host's interface, click Edit . Verify that the fields are populated with values. Note in particular: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. In the MAC address field, enter a MAC address for the host. This ensures the identification of the host during the PXE boot process. Ensure that Satellite Server automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system. Optional: Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use. For more information about associating provisioning templates, see Section 2.11, "Provisioning Templates" . Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host details. For more information about network interfaces, see Adding network interfaces . This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for PXE booting the bare metal host (see the verification sketch below). If you start the physical host and set its boot mode to PXE, the host detects the DHCP service of Satellite Server's integrated Capsule, receives the HTTP endpoint of the kickstart tree, and installs the operating system. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. CLI procedure Create the host with the hammer host create command: Ensure the network interface options are set using the hammer host interface update command: 6.4. Creating Hosts with PXE-less Provisioning Some hardware does not provide a PXE boot interface. In Satellite, you can provision a host without PXE boot. This is also known as PXE-less provisioning and involves generating a boot ISO that hosts can use. Using this ISO, the host can connect to Satellite Server, boot the installation media, and install the operating system. Satellite also provides a PXE-less discovery service that operates without PXE-based services, such as DHCP and TFTP. For more information, see Section 7.7, "Implementing PXE-less Discovery" . 
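Before continuing with PXE-less provisioning, you can optionally verify that the unattended workflow in Section 6.3 deployed its PXE configuration. This is a minimal sketch and not part of the official procedure; it assumes the default TFTP root directory /var/lib/tftpboot on the Capsule and the example MAC address aa:aa:aa:aa:aa:aa.
# On the TFTP Capsule, list the PXELinux configuration written for the host
# (the file name is the MAC address in lowercase with dashes, prefixed with 01-)
ls /var/lib/tftpboot/pxelinux.cfg/
cat /var/lib/tftpboot/pxelinux.cfg/01-aa-aa-aa-aa-aa-aa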
Boot ISO Types The following types of boot ISOs are available: Full host image A boot ISO that contains the kernel and initial RAM disk image for the specific host. This image is useful if the host fails to chainload correctly. The provisioning template still downloads from Satellite Server. Subnet image A boot ISO that is not associated with a specific host. The ISO sends the host's MAC address to Capsule Server, which matches it against the host entry. The image does not store IP address details and requires access to a DHCP server on the network to bootstrap. This image is generic to all hosts with a provisioning NIC on the same subnet. Because the image is based on iPXE boot firmware, only a limited number of network cards are supported. Note The Full host image is based on SYSLINUX and Grub and works with most network cards. When using a Subnet image , see supported hardware on ipxe.org for a list of network card drivers expected to work with an iPXE-based boot disk. The Full host image contains a provisioning token; therefore, the generated image has a limited lifespan. For more information about configuring security tokens, read Section 6.2, "Configuring the Security Token Validity Duration" . To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name that you want to become the provisioned system's host name. Click the Organization and Location tabs and change the context to match your requirements. From the Host Group list, select a host group that you want to use to populate the form. Click the Interface tab, and on the host's interface, click Edit . Verify that the fields are populated with values. Note in particular: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. In the MAC address field, enter a MAC address for the host. Ensure that Satellite Server automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system. Click Resolve in Provisioning Templates to check that the new host can identify the right provisioning templates to use. For more information about associating provisioning templates, see Section 2.11, "Provisioning Templates" . Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host details. This creates a host entry and the host details page appears. Download the boot disk from Satellite Server. For Full host image , on the host details page, click the vertical ellipsis and select Full host ' My_Host_Name ' image . For Subnet image , navigate to Infrastructure > Subnets , click the dropdown menu in the Actions column of the required subnet and select Subnet generic image . Write the ISO to a USB storage device using the dd utility or livecd-tools if required. When you start the host and boot from the ISO or the USB storage device, the host connects to Satellite Server and starts installing the operating system from its kickstart tree. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. CLI procedure Create the host using the hammer host create command. 
Ensure that your network interface options are set using the hammer host interface update command. Download the boot disk from Satellite Server using the hammer bootdisk command: For Full host image : For Subnet image : This creates a boot ISO for your host to use. Write the ISO to a USB storage device using the dd utility or livecd-tools if required. When you start the physical host and boot from the ISO or the USB storage device, the host connects to Satellite Server and starts installing the operating system from its kickstart tree. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. 6.5. Creating Hosts with UEFI HTTP Boot Provisioning You can provision hosts from Satellite using UEFI HTTP Boot. This is the only method with which you can provision hosts in an IPv6 network. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you meet the requirements for HTTP booting. For more information, see HTTP Booting Requirements in Planning for Satellite . Procedure On the Capsule that you use for provisioning, update the grub2-efi package to the latest version: Enable the foreman-proxy-http , foreman-proxy-httpboot , and foreman-proxy-tftp features. In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs and change the context to match your requirements. From the Host Group list, select a host group that you want to use to populate the form. Click the Interface tab, and on the host's interface, click Edit . Verify that the fields are populated with values. Note in particular: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. In the MAC address field, enter a MAC address of the host's provisioning interface. This ensures the identification of the host during the PXE boot process. Ensure that Satellite Server automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system. From the PXE Loader list, select Grub2 UEFI HTTP . Optional: Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use. For more information about associating provisioning templates, see Section 2.13, "Creating Provisioning Templates" . Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host details. For more information about network interfaces, see Adding network interfaces . Set the host to boot in UEFI mode from the network. Start the host. From the boot menu, select Kickstart default PXEGrub2 . This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for UEFI booting the bare metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives the HTTP endpoint of the Capsule with the kickstart tree, and installs the operating system. 
When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. CLI procedure On the Capsule that you use for provisioning, update the grub2-efi package to the latest version: Enable the foreman-proxy-http , foreman-proxy-httpboot , and foreman-proxy-tftp features: Create the host with the hammer host create command. Ensure the network interface options are set using the hammer host interface update command: Set the host to boot in UEFI mode from the network. Start the host. From the boot menu, select Kickstart default PXEGrub2 . This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for UEFI booting the bare metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives the HTTP endpoint of the Capsule with the kickstart tree, and installs the operating system. When the installation completes, the host also registers to Satellite Server using the activation key and installs the necessary configuration and management tools from the Satellite Client 6 repository. 6.6. Deploying SSH Keys During Provisioning Use this procedure to deploy SSH keys added to a user during provisioning. For information on adding SSH keys to a user, see Managing SSH Keys for a User in the Administering Red Hat Satellite guide. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates . Create a provisioning template, or clone and edit an existing template. For more information, see Section 2.13, "Creating Provisioning Templates" . In the template, click the Template tab. In the Template editor field, add the create_users snippet to the %post section: Select the Default checkbox. Click the Association tab. From the Applicable Operating Systems list, select an operating system. Click Submit to save the provisioning template. Create a host that is associated with the provisioning template or rebuild a host using the OS associated with the modified template. For more information, see Creating a Host in the Managing Hosts guide. The SSH keys of the Owned by user are added automatically when the create_users snippet is executed during the provisioning process. You can set Owned by to an individual user or a user group. If you set Owned by to a user group, the SSH keys of all users in the user group are added automatically.
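After the host is provisioned, you can confirm that the keys were deployed. This is a minimal sketch rather than part of the official procedure; the user name and host name below are placeholders for your environment.
# Log in to the provisioned host and list the deployed public keys for the owning user
# (admin and my-host.example.com are example values)
ssh admin@my-host.example.com 'cat ~/.ssh/authorized_keys'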
[ "hammer host create --name \" My_Unattended_Host \" --organization \" My_Organization \" --location \" My_Location \" --hostgroup \" My_Host_Group \" --mac \" aa:aa:aa:aa:aa:aa \" --build true --enabled true --managed true", "hammer host interface update --host \"test1\" --managed true --primary true --provision true", "hammer host create --name \" My_Host_Name \" --organization \" My_Organization \" --location \" My_Location \" --hostgroup \" My_Host_Group \" --mac \" aa:aa:aa:aa:aa:aa \" --build true --enabled true --managed true", "hammer host interface update --host \" My_Host_Name \" --managed true --primary true --provision true", "hammer bootdisk host --host My_Host_Name.example.com --full true", "hammer bootdisk subnet --subnet My_Subnet_Name", "satellite-maintain packages update grub2-efi", "satellite-installer --scenario satellite --foreman-proxy-http true --foreman-proxy-httpboot true --foreman-proxy-tftp true", "satellite-maintain packages update grub2-efi", "satellite-installer --scenario satellite --foreman-proxy-http true --foreman-proxy-httpboot true --foreman-proxy-tftp true", "hammer host create --name \" My_Host \" --build true --enabled true --hostgroup \" My_Host_Group \" --location \" My_Location \" --mac \" aa:aa:aa:aa:aa:aa \" --managed true --organization \" My_Organization \" --pxe-loader \"Grub2 UEFI HTTP\"", "hammer host interface update --host \" My_Host \" --managed true --primary true --provision true", "<%= snippet('create_users') %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/Using_PXE_to_Provision_Hosts_provisioning
Chapter 22. Installation Phase 2: Configuring Language and Installation Source
Chapter 22. Installation Phase 2: Configuring Language and Installation Source Before the graphical installation program starts, you need to configure the language and installation source. By default, if you are installing interactively (with the default parameter file generic.prm ), the loader program that you use to select the language and installation source starts in text mode. In your new ssh session, the following message is displayed: 22.1. Non-interactive Line-Mode Installation If the cmdline option was specified as a boot option in your parameter file ( Section 26.6, "Parameters for Kickstart Installations" ) or in your kickstart file (refer to Section 32.3, "Creating the Kickstart File" ), the loader starts up with line-mode-oriented text output. In this mode, all necessary information must be provided in the kickstart file. The installer does not allow user interaction and stops if any required installation information is unspecified.
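As an illustration, a parameter file for a non-interactive kickstart installation might include entries like the following sketch. The kickstart URL and device numbers are placeholders, and the exact set of parameters depends on your environment; see Section 26.6 for the authoritative list.
ro ramdisk_size=40000 cio_ignore=all,!condev
CMSDASD=191 CMSCONFFILE=redhat.conf
ks=http://install.example.com/kickstart/rhel6-s390x.cfg
RUNKS=1 cmdline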
[ "Welcome to the anaconda install environment 1.2 for zSeries" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-Installation_Phase_2-s390
Appendix A. Component Versions
Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7 release.
Table A.1. Component Versions (Component: Version)
kernel: 3.10.0-1127
kernel-alt: 4.14.0-115
QLogic qla2xxx driver: 10.01.00.20.07.8-k
QLogic qla4xxx driver: 5.04.00.00.07.02-k0
Emulex lpfc driver: 0:12.0.0.13
iSCSI initiator utils ( iscsi-initiator-utils ): 6.2.0.874-17
DM-Multipath ( device-mapper-multipath ): 0.4.9-131
LVM ( lvm2 ): 2.02.186-7
qemu-kvm [a]: 1.5.3-173
qemu-kvm-ma [b]: 2.12.0-33
[a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems.
[b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages.
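To compare these values against a running system, you can query the installed packages directly. This is a minimal sketch; it assumes a Red Hat Enterprise Linux 7.8 host and that the listed packages are installed, which is not the case on every system.
# Query a few of the components listed above on a RHEL 7.8 host
rpm -q kernel lvm2 device-mapper-multipath iscsi-initiator-utils qemu-kvm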
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/component_versions
7.93. json-c
7.93. json-c 7.93.1. RHBA-2015:1397 - json-c bug fix update Updated json-c packages that fix one bug are now available for Red Hat Enterprise Linux 6. JSON-C implements a reference counting object model that allows users to easily construct JavaScript Object Notation (JSON) objects in C, output them as JSON formatted strings, and parse JSON formatted strings back into the C representation of JSON objects. Bug Fix BZ# 1158842 The pkg-config (.pc) files for JSON-C were incorrectly placed in the /lib64/pkgconfig/ directory in the 64-bit packages and in the /lib/pkgconfig/ directory in the 32-bit packages. Consequently, the pkg-config tool was unable to find these files and failed to provide the location of the installed JSON-C libraries, header files, and other information about JSON-C. With this update, the pkg-config files have been moved to the /usr/lib64/pkgconfig/ and /usr/lib/pkgconfig/ directories, respectively. As a result, the pkg-config tool now successfully returns information about the installed JSON-C packages. Users of JSON-C are advised to upgrade to these updated packages, which fix this bug.
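As an example of the behavior this update restores, pkg-config can now resolve the JSON-C metadata. The module name below is an assumption; depending on the json-c version it may be registered as json or json-c, so check the list first.
# Confirm that the JSON-C .pc file is visible to pkg-config
pkg-config --list-all | grep -i json
# Print the compiler and linker flags for building against JSON-C (module name may be json or json-c)
pkg-config --cflags --libs json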
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-json-c
Chapter 1. Overview of nodes
Chapter 1. Overview of nodes 1.1. About nodes A node is a virtual or bare-metal machine in a Kubernetes cluster. Worker nodes host your application containers, grouped as pods. The control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Dedicated, the control plane nodes contain more than just the Kubernetes services for managing the OpenShift Dedicated cluster. Having stable and healthy nodes in a cluster is fundamental to the smooth functioning of your hosted application. In OpenShift Dedicated, you can access, manage, and monitor a node through the Node object representing the node. Using the OpenShift CLI ( oc ) or the web console, you can perform the following operations on a node. The following components of a node are responsible for keeping pods running and providing the Kubernetes runtime environment. Container runtime The container runtime is responsible for running containers. Kubernetes offers several runtimes such as containerd, cri-o, rktlet, and Docker. Kubelet Kubelet runs on nodes and reads the container manifests. It ensures that the defined containers have started and are running. The kubelet process maintains the state of work and the node server. Kubelet manages network rules and port forwarding. The kubelet manages containers that are created by Kubernetes only. Kube-proxy Kube-proxy runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. A Kube-proxy ensures that the networking environment is isolated and accessible. DNS Cluster DNS is a DNS server that serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. Read operations The read operations allow an administrator or a developer to get information about nodes in an OpenShift Dedicated cluster. List all the nodes in a cluster . Get information about a node, such as memory and CPU usage, health, status, and age. List pods running on a node . Example oc commands for these read operations are sketched after the glossary at the end of this chapter. Enhancement operations OpenShift Dedicated allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient and application-friendly, and to provide a better environment for your developers. Manage node-level tuning for high-performance applications that require some level of kernel tuning by using the Node Tuning Operator . Run background tasks on nodes automatically with daemon sets . You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes. 1.2. About pods A pod is one or more containers deployed together on a node. As a cluster administrator, you can define a pod, assign it to run on a healthy node that is ready for scheduling, and manage it. A pod runs as long as the containers are running. You cannot change a pod after it is defined and running. Some operations you can perform when working with pods are: Read operations As an administrator, you can get information about pods in a project through the following tasks: List pods associated with a project , including information such as the number of replicas and restarts, current status, and age. View pod usage statistics such as CPU, memory, and storage consumption. Management operations The following list of tasks provides an overview of how an administrator can manage pods in an OpenShift Dedicated cluster. 
Control scheduling of pods using the advanced scheduling features available in OpenShift Dedicated: Node-to-pod binding rules such as pod affinity , node affinity , and anti-affinity . Node labels and selectors . Pod topology spread constraints . Configure how pods behave after a restart using pod controllers and restart policies . Limit both egress and ingress traffic on a pod . Add and remove volumes to and from any object that has a pod template . A volume is a mounted file system available to all the containers in a pod. Container storage is ephemeral; you can use volumes to persist container data. Enhancement operations You can work with pods more easily and efficiently with the help of various tools and features available in OpenShift Dedicated. The following operations involve using those tools and features to better manage pods. Secrets: Some applications need sensitive information, such as passwords and usernames. An administrator can use the Secret object to provide sensitive data to pods . 1.3. About containers A container is the basic unit of an OpenShift Dedicated application, which comprises the application code packaged along with its dependencies, libraries, and binaries. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public clouds. Linux container technologies are lightweight mechanisms for isolating running processes and limiting access to only designated resources. As an administrator, you can perform various tasks on a Linux container, such as: Copy files to and from a container . Allow containers to consume API objects . Execute remote commands in a container . Use port forwarding to access applications in a container . OpenShift Dedicated provides specialized containers called Init containers . Init containers run before application containers and can contain utilities or setup scripts not present in an application image. You can use an Init container to perform tasks before the rest of a pod is deployed. Apart from performing specific tasks on nodes, pods, and containers, you can work with the overall OpenShift Dedicated cluster to keep the cluster efficient and the application pods highly available. 1.4. Glossary of common terms for OpenShift Dedicated nodes This glossary defines common terms that are used in the node content. Container A lightweight and executable image that comprises software and all of its dependencies. Containers virtualize the operating system, as a result, you can run containers anywhere from a data center to a public or private cloud to even a developer's laptop. Daemon set Ensures that a replica of the pod runs on eligible nodes in an OpenShift Dedicated cluster. egress The process of data sharing externally through a network's outbound traffic from a pod. garbage collection The process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Ingress Incoming traffic to a pod. Job A process that runs to completion. A job creates one or more pod objects and ensures that the specified pods are successfully completed. Labels You can use labels, which are key-value pairs, to organize and select subsets of objects, such as a pod. Node A worker machine in the OpenShift Dedicated cluster. A node can be either a virtual machine (VM) or a physical machine. Node Tuning Operator You can use the Node Tuning Operator to manage node-level tuning by using the TuneD daemon. 
It ensures that custom tuning specifications are passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Self Node Remediation Operator The Operator runs on the cluster nodes and identifies and reboots nodes that are unhealthy. Pod One or more containers with shared resources, such as volumes and IP addresses, running in your OpenShift Dedicated cluster. A pod is the smallest compute unit that can be defined, deployed, and managed. Toleration Indicates that the pod is allowed (but not required) to be scheduled on nodes or node groups with matching taints. You can use tolerations to enable the scheduler to schedule pods with matching taints. Taint A core object that comprises a key, value, and effect. Taints and tolerations work together to ensure that pods are not scheduled on irrelevant nodes.
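The following commands sketch the node and pod read operations and the taint concept described in this chapter. This is a minimal sketch with placeholder names; depending on your OpenShift Dedicated permissions, some of these commands may be restricted.
# List nodes and inspect one of them (node_name is a placeholder)
oc get nodes
oc describe node <node_name>
oc adm top node
# List the pods that are running on a specific node
oc get pods --all-namespaces --field-selector spec.nodeName=<node_name>
# Pod read operations within a project (project_name is a placeholder)
oc get pods -n <project_name> -o wide
oc adm top pods -n <project_name>
# Taint a node so that only pods with a matching toleration are scheduled on it
oc adm taint nodes <node_name> dedicated=experimental:NoSchedule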
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/nodes/overview-of-nodes