title (stringlengths 4-168) | content (stringlengths 7-1.74M) | commands (sequencelengths 1-5.62k, ⌀) | url (stringlengths 79-342) |
---|---|---|---|
2.3. Built-in Command-Line Tools | 2.3. Built-in Command-Line Tools Red Hat Enterprise Linux 7 provides several tools that you can use to monitor your system from the command line, even outside run level 5. This chapter discusses each tool briefly and provides links to further information about when each tool should be used and how to use it. 2.3.1. top The top tool, provided by the procps-ng package, gives a dynamic view of the processes in a running system. It can display a variety of information, including a system summary and a list of tasks currently being managed by the Linux kernel. It also has a limited ability to manipulate processes, and to make configuration changes persistent across system restarts. By default, the processes displayed are ordered according to the percentage of CPU usage, so that you can easily see the processes consuming the most resources. Both the information top displays and its operation are highly configurable to allow you to concentrate on different usage statistics as required. For detailed information about using top, see the man page: 2.3.2. ps The ps tool, provided by the procps-ng package, takes a snapshot of a select group of active processes. By default, the group examined is limited to processes that are owned by the current user and associated with the terminal in which ps is run. ps can provide more detailed information about processes than top, but by default it provides a single snapshot of this data, ordered by process identifier. For detailed information about using ps, see the man page: 2.3.3. Virtual Memory Statistics (vmstat) The Virtual Memory Statistics tool, vmstat, provides instant reports on your system's processes, memory, paging, block input/output, interrupts, and CPU activity. vmstat lets you set a sampling interval so that you can observe system activity in near-real time. vmstat is provided by the procps-ng package. For detailed information about using vmstat, see the man page: 2.3.4. System Activity Reporter (sar) The System Activity Reporter, sar, collects and reports information about system activity that has occurred so far on the current day. The default output displays the current day's CPU usage at 10-minute intervals from the beginning of the day (00:00:00 according to your system clock). You can also use the -i option to set the interval time in seconds; for example, sar -i 60 tells sar to check CPU usage every minute. sar is a useful alternative to manually creating periodic reports on system activity with top. It is provided by the sysstat package. For detailed information about using sar, see the man page: | [
"man top",
"man ps",
"man vmstat",
"man sar"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-built_in_command_line_tools |
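A minimal shell sketch of how these tools are typically invoked with sampling intervals; the interval and count values below are illustrative choices, not recommendations taken from the man pages.

```bash
# Five one-second samples of processes, memory, paging, block I/O, interrupts, and CPU.
vmstat 1 5

# One-off snapshot of all processes, sorted by CPU usage (similar to top's default ordering).
ps aux --sort=-%cpu | head

# Live CPU utilization, sampled every 60 seconds, five times.
sar -u 60 5

# Re-read today's collected sysstat data at roughly one-minute resolution, as described above.
sar -u -i 60
```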
Chapter 40. Case roles | Chapter 40. Case roles Case roles provide an additional layer of abstraction for user participation in case handling. Roles, users, and groups are used for different purposes in case management. Roles Roles drive the authorization for a case instance and are used for user activity assignments. A user or one or more groups can be assigned to the owner role. The owner is whoever the case belongs to. Roles are not restricted to a single set of people or groups as part of a case definition. To ensure that the case remains dynamic, use roles to specify task assignments instead of assigning a specific user or group directly. Groups A group is a collection of users who are able to carry out a particular task or have a set of specified responsibilities. You can assign any number of people to a group and assign any group to a role. You can add or change members of a group at any time. Do not hard code a group to a particular task. Users A user is an individual who can be given a particular task when you assign them a role or add them to a group. Note Do not create a user called unknown in the process engine or KIE Server. The unknown user account is a reserved system name with superuser access. The unknown user account performs tasks related to the SLA violation listener when there are no users logged in. The following example illustrates how the preceding case management concepts apply to a hotel reservation with the following information: Role: Guest Group: Receptionist, Maid User: Marilyn The Guest role assignment affects the specific work of the associated case and is unique to each case instance. Every case instance will have its own role assignments. The number of users or groups that can be assigned to a role is limited by the case Cardinality, which is set during role creation in the process designer and case definition. For example, the hotel reservation case has only one guest, while the IT_Orders sample project has two suppliers of IT hardware. When roles are defined, ensure that roles are not hard-coded to a single set of people or groups as part of the case definition and that they can differ for each case instance. This is why case role assignments are important. Role assignments can be added or removed when a case starts or at any time while a case is active. Although roles are optional, use roles in case definitions to maintain an organized workflow. Important Always use roles for task assignments instead of actual user or group names. This ensures that the case and user or group assignments can be made as late as required. Roles are assigned to users or groups and authorized to perform tasks when a case instance is started. 40.1. Creating case roles You can create and define case roles in the case definition when you design the case in the process designer. Case roles are configured at the case definition level to keep them separate from the actors involved in handling the case instance. Roles can be assigned to user tasks or used as contact references throughout the case lifecycle, but they are not defined in the case as a specific user or group of users. Case instances include the individuals that are actually handling the case work. Assign roles when starting a new case instance. To keep cases flexible, you can modify case role assignments during case run time, although doing this has no effect on tasks already created based on the role assignment.
The actor assigned to a role is flexible but the role itself remains the same for each case. Prerequisites A case project that has a case definition exists in Business Central. The case definition asset is open in the process designer. Procedure To define the roles involved in the case, click on an empty space in the editor's canvas, and click to open the Properties menu. Expand Case Management to add a case role. The case role requires a name for the role and a case cardinality. Case cardinality is the number of actors that are assigned to the role in any case instance. For example, the IT_Orders sample case management project includes the following roles: Figure 40.1. ITOrders Case Roles In this example, you can assign only one actor (a user or a group) as the case owner and assign only one actor to the manager role. The supplier role can have two actors assigned. Depending on the case, you can assign any number of actors to a particular role based on the configured case cardinality of the role. 40.2. Role authorization Roles are authorized to perform specific case management tasks when starting a new case instance using the Showcase application or the REST API. Use the following procedure to start a new IT Orders case using the REST API. Prerequisites The IT_Orders sample project has been imported in Business Central and deployed to KIE Server. Procedure Create a POST REST API call with the following endpoint: http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances itorders : The container alias that has been deployed to KIE Server. itorders.orderhardware : The name of the case definition. Provide the following role configuration in the request body: { "case-data" : { }, "case-user-assignments" : { "owner" : "cami", "manager" : "cami" }, "case-group-assignments" : { "supplier" : "IT" } } This starts a new case with defined roles, as well as autostart activities, which are started and ready to be worked on. Two of the roles are user assignments ( owner and manager ) and the third is a group assignment ( supplier ). After the case instance is successfully started, the case instance returns the IT-0000000001 case ID. For information about how to start a new case instance using the Showcase application, see Using the Showcase application for case management . 40.3. Assigning a task to a role Case management processes need to be as flexible as possible to accommodate changes that can happen dynamically during run time. This includes changing user assignments for new case instances or for active cases. For this reason, ensure that you do not hard code roles to a single set of users or groups in the case definition. Instead, role assignments can be defined on the task nodes in the case definition, with users or groups assigned to the roles on case creation. Red Hat Process Automation Manager contains a predefined selection of node types to simplify business process creation. The predefined node panel is located on the left side of the diagram editor. Prerequisites A case definition has been created with case roles configured at the case definition level. For more information about creating case roles, see Section 40.1, "Creating case roles" . Procedure Open the Activities menu in the designer palette and drag the user or service task that you want to add to your case definition onto the process designer canvas. With the task node selected, click to open the Properties panel on the right side of the designer. 
Expand Implementation/Execution, click Add below the Actors property, and either select or type the name of the role to which the task will be assigned. You can use the Groups property in the same way for group assignments. For example, in the IT_Orders sample project, the Manager approval user task is assigned to the manager role: In this example, after the Prepare hardware spec user task has been completed, the user assigned to the manager role will receive the Manager approval task in their Task Inbox in Business Central. The user assigned to the role can be changed during the case run time, but the task itself continues to have the same role assignment. For example, the person originally assigned to the manager role might need to take time off (if they become ill, for example), or they might unexpectedly leave the company. To respond to this change in circumstances, you can edit the manager role assignment so that someone else can be assigned the tasks associated with that role. For information about how to change role assignments during case run time, see Section 40.4, "Modifying case role assignments during run time using Showcase" or Section 40.5, "Modifying case role assignments during run time using REST API" . 40.4. Modifying case role assignments during run time using Showcase You can change case instance role assignments during case run time using the Showcase application. Roles are defined in the case definition and assigned to tasks in the case lifecycle. Roles cannot change during run time because they are predefined, but you can change the actors assigned to the roles to change who is responsible for carrying out case tasks. Prerequisites An active case instance exists with users or groups already assigned to at least one case role. Procedure In the Showcase application, click the case you want to work on in the Case list to open the case overview. Locate the role assignment that you want to change in the Roles box in the lower-right corner of the page. To remove a single user or group from the role assignment, click the icon next to the assignment. In the confirmation window, click Remove to remove the user or group from the role. To remove all role assignments from a role, click the icon next to the role and select the Remove all assignments option. In the confirmation window, click Remove to remove all user and group assignments from the role. To change the role assignment from one user or group to another, click the icon next to the role and select the Edit option. In the Edit role assignment window, delete the name of the assignee that you want to remove from the role assignment. Type the name of the user you want to assign to the role into the User field or the group you want to assign in the Group field. At least one user or group must be assigned when editing a role assignment. Click Assign to complete the role assignment. 40.5. Modifying case role assignments during run time using REST API You can change case instance role assignments during case run time using the REST API or Swagger application. Roles are defined in the case definition and assigned to tasks in the case life cycle. Roles cannot change during run time because they are predefined, but you can change the actors assigned to the roles to change who is responsible for carrying out case tasks. The following procedure includes examples based on the IT_Orders sample project. You can use the same REST API endpoints in the Swagger application, in any other REST API client, or with curl. 
Prerequisites An IT Orders case instance has been started with owner , manager , and supplier roles already assigned to actors. Procedure Retrieve the list of current role assignments using a GET request on the following endpoint: http://localhost:8080/kie-server/services/rest/server/containers/{id}/cases/instances/{caseId}/roles Table 40.1. Parameters Name Description id itorders caseId IT-0000000001 This returns the following response: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <case-role-assignment-list> <role-assignments> <name>owner</name> <users>Aimee</users> </role-assignments> <role-assignments> <name>manager</name> <users>Katy</users> </role-assignments> <role-assignments> <name>supplier</name> <groups>Lenovo</groups> </role-assignments> </case-role-assignment-list> To change the user assigned to the manager role, you must first remove the role assignment from the user Katy using DELETE . /server/containers/{id}/cases/instances/{caseId}/roles/{caseRoleName} Include the following information in the Swagger client request: Table 40.2. Parameters Name Description id itorders caseId IT-0000000001 caseRoleName manager user Katy Click Execute . Execute the GET request from the first step again to check that the manager role no longer has a user assigned: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <case-role-assignment-list> <role-assignments> <name>owner</name> <users>Aimee</users> </role-assignments> <role-assignments> <name>manager</name> </role-assignments> <role-assignments> <name>supplier</name> <groups>Lenovo</groups> </role-assignments> </case-role-assignment-list> Assign the user Cami to the manager role using a PUT request on the following endpoint: /server/containers/{id}/cases/instances/{caseId}/roles/{caseRoleName} Include the following information in the Swagger client request: Table 40.3. Parameters Name Description id itorders caseId IT-0000000001 caseRoleName manager user Cami Click Execute . Execute the GET request from the first step again to check that the manager role is now assigned to Cami : <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <case-role-assignment-list> <role-assignments> <name>owner</name> <users>Aimee</users> </role-assignments> <role-assignments> <name>manager</name> <users>Cami</users> </role-assignments> <role-assignments> <name>supplier</name> <groups>Lenovo</groups> </role-assignments> </case-role-assignment-list> | [
"{ \"case-data\" : { }, \"case-user-assignments\" : { \"owner\" : \"cami\", \"manager\" : \"cami\" }, \"case-group-assignments\" : { \"supplier\" : \"IT\" } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <case-role-assignment-list> <role-assignments> <name>owner</name> <users>Aimee</users> </role-assignments> <role-assignments> <name>manager</name> <users>Katy</users> </role-assignments> <role-assignments> <name>supplier</name> <groups>Lenovo</groups> </role-assignments> </case-role-assignment-list>",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <case-role-assignment-list> <role-assignments> <name>owner</name> <users>Aimee</users> </role-assignments> <role-assignments> <name>manager</name> </role-assignments> <role-assignments> <name>supplier</name> <groups>Lenovo</groups> </role-assignments> </case-role-assignment-list>",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <case-role-assignment-list> <role-assignments> <name>owner</name> <users>Aimee</users> </role-assignments> <role-assignments> <name>manager</name> <users>Cami</users> </role-assignments> <role-assignments> <name>supplier</name> <groups>Lenovo</groups> </role-assignments> </case-role-assignment-list>"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-roles-con-case-management-design |
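A minimal curl sketch of the calls described in Section 40.2 and Section 40.5; the KIE Server URL and the wbadmin credentials are placeholder assumptions and must be replaced with values from your own deployment.

```bash
# Placeholder KIE Server base URL and credentials.
KIE=http://localhost:8080/kie-server/services/rest/server
AUTH=wbadmin:wbadmin

# 40.2: start a new IT_Orders case with the role assignments shown above.
curl -s -u "$AUTH" -X POST \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -d '{"case-data":{},"case-user-assignments":{"owner":"cami","manager":"cami"},"case-group-assignments":{"supplier":"IT"}}' \
  "$KIE/containers/itorders/cases/itorders.orderhardware/instances"

# 40.5: list role assignments, remove Katy from the manager role, then assign Cami.
curl -s -u "$AUTH" "$KIE/containers/itorders/cases/instances/IT-0000000001/roles"
curl -s -u "$AUTH" -X DELETE "$KIE/containers/itorders/cases/instances/IT-0000000001/roles/manager?user=Katy"
curl -s -u "$AUTH" -X PUT    "$KIE/containers/itorders/cases/instances/IT-0000000001/roles/manager?user=Cami"
```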
Disconnected installation mirroring | Disconnected installation mirroring OpenShift Container Platform 4.16 Mirroring the installation container images Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/disconnected_installation_mirroring/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_spring_boot_starter/making-open-source-more-inclusive_datagrid |
Chapter 5. Managing Users and Roles | Chapter 5. Managing Users and Roles A User defines a set of details for individuals using the system. Users can be associated with organizations and environments, so that when they create new entities, the default settings are automatically used. Users can also have one or more roles attached, which grants them rights to view and manage organizations and environments. See Section 5.1, "User Management" for more information on working with users. You can manage permissions of several users at once by organizing them into user groups. User groups themselves can be further grouped to create a hierarchy of permissions. For more information on creating user groups, see Section 5.4, "Creating and Managing User Groups" . Roles define a set of permissions and access levels. Each role contains one or more permission filters that specify the actions allowed for the role. Actions are grouped according to the Resource type . Once a role has been created, users and user groups can be associated with that role. This way, you can assign the same set of permissions to large groups of users. Satellite provides a set of predefined roles and also enables creating custom roles and permission filters as described in Section 5.5, "Creating and Managing Roles" . 5.1. User Management As an administrator, you can create, modify, and remove Satellite users. You can also configure access permissions for a user or a group of users by assigning them different roles . 5.1.1. Creating a User Use this procedure to create a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click Create User . In the Login field, enter a username for the user. In the Firstname and Lastname fields, enter the real first name and last name of the user. In the Mail field, enter the user's email address. In the Description field, add a description of the new user. Select a specific language for the user from the Language list. Select a timezone for the user from the Timezone list. By default, Satellite Server uses the language and timezone settings of the user's browser. Set a password for the user: From the Authorized by list, select the source by which the user is authenticated. INTERNAL : to enable the user to be managed inside Satellite Server. EXTERNAL : to configure external authentication as described in Chapter 14, Configuring External Authentication . Enter an initial password for the user in the Password field and the Verify field. Click Submit to create the user. CLI procedure To create a user, enter the following command: The --auth-source-id 1 setting means that the user is authenticated internally; you can specify an external authentication source as an alternative. Add the --admin option to grant administrator privileges to the user. Specifying organization IDs is not required; you can modify the user details later using the update subcommand. For more information about user-related subcommands, enter hammer user --help . 5.1.2. Assigning Roles to a User Use this procedure to assign roles to a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click the username of the user to be assigned one or more roles. Note If a user account is not listed, check that you are currently viewing the correct organization. To list all the users in Satellite, click Default Organization and then Any Organization . 
Click the Locations tab, and select a location if none is assigned. Click the Organizations tab, and check that an organization is assigned. Click the Roles tab to display the list of available roles. Select the roles to assign from the Roles list. To grant all the available permissions, select the Admin checkbox. Click Submit . To view the roles assigned to a user, click the Roles tab; the assigned roles are listed under Selected items . To remove an assigned role, click the role name in Selected items . CLI procedure To assign roles to a user, enter the following command: 5.1.3. Impersonating a Different User Account Administrators can impersonate other authenticated users for testing and troubleshooting purposes by temporarily logging on to the Satellite web UI as a different user. When impersonating another user, the administrator has permissions to access exactly what the impersonated user can access in the system, including the same menus. Audits are created to record the actions that the administrator performs while impersonating another user. However, all actions that an administrator performs while impersonating another user are recorded as having been performed by the impersonated user. Prerequisites Ensure that you are logged on to the Satellite web UI as a user with administrator privileges for Satellite. Procedure In the Satellite web UI, navigate to Administer > Users . To the right of the user that you want to impersonate, from the list in the Actions column, select Impersonate . When you want to stop the impersonation session, in the upper right of the main menu, click the impersonation icon. 5.1.4. Creating an API-Only User You can create users that can interact only with the Satellite API. Prerequisite You have created a user and assigned roles to them. Note that this user must be authorized internally. For more information, see Creating a User and Assigning Roles to a User . Procedure Log in to your Satellite as admin. Navigate to Administer > Users and select a user. On the User tab, set a password. Do not save or communicate this password with others. You can create pseudo-random strings on your console: Create a Personal Access Token for the user. For more information, see Section 5.3.1, "Creating a Personal Access Token" . 5.2. SSH Key Management Adding SSH keys to a user allows deployment of SSH keys during provisioning. For information on deploying SSH keys during provisioning, see Deploying SSH Keys during Provisioning in the Provisioning guide. For information on SSH keys and SSH key creation, see Using SSH-based Authentication in the Red Hat Enterprise Linux 7 System Administrator's Guide . 5.2.1. Managing SSH Keys for a User Use this procedure to add or remove SSH keys for a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you are logged in to the Satellite web UI as an Admin user of Red Hat Satellite or a user with the create_ssh_key permission enabled for adding SSH key and destroy_ssh_key permission for removing a key. Procedure In the Satellite web UI, navigate to Administer > Users . From the Username column, click on the username of the required user. Click on the SSH Keys tab. To Add SSH key Prepare the content of the public SSH key in a clipboard. Click Add SSH Key . In the Key field, paste the public SSH key content from the clipboard. In the Name field, enter a name for the SSH key. Click Submit . To Remove SSH key Click Delete on the row of the SSH key to be deleted. 
Click OK in the confirmation prompt. CLI procedure To add an SSH key to a user, you must specify either the path to the public SSH key file or the content of the public SSH key copied to the clipboard. If you have the public SSH key file, enter the following command: If you have the content of the public SSH key, enter the following command: To delete an SSH key from a user, enter the following command: To view an SSH key attached to a user, enter the following command: To list SSH keys attached to a user, enter the following command: 5.3. Managing Personal Access Tokens Personal Access Tokens allow you to authenticate API requests without using your password. You can set an expiration date for your Personal Access Token and you can revoke it if you decide it should expire before the expiration date. 5.3.1. Creating a Personal Access Token Use this procedure to create a Personal Access Token. Procedure In the Satellite web UI, navigate to Administer > Users . Select the user for whom you want to create a Personal Access Token. On the Personal Access Tokens tab, click Add Personal Access Token . Enter a Name for your Personal Access Token. Optional: Select the Expires date to set an expiration date. If you do not set an expiration date, your Personal Access Token will never expire unless revoked. Click Submit. You now have the Personal Access Token available to you on the Personal Access Tokens tab. Important Ensure that you store your Personal Access Token, because you will not be able to access it again after you leave the page or create a new Personal Access Token. You can click Copy to clipboard to copy your Personal Access Token. Verification Make an API request to your Satellite Server and authenticate with your Personal Access Token: You should receive a response with status 200, for example: If you go back to the Personal Access Tokens tab, you can see the updated Last Used time next to your Personal Access Token. 5.3.2. Revoking a Personal Access Token Use this procedure to revoke a Personal Access Token before its expiration date. Procedure In the Satellite web UI, navigate to Administer > Users . Select the user for whom you want to revoke the Personal Access Token. On the Personal Access Tokens tab, locate the Personal Access Token you want to revoke. Click Revoke in the Actions column next to the Personal Access Token you want to revoke. Verification Make an API request to your Satellite Server and try to authenticate with the revoked Personal Access Token: You receive the following error message: 5.4. Creating and Managing User Groups 5.4.1. User Groups With Satellite, you can assign permissions to groups of users. You can also create user groups as collections of other user groups. If using an external authentication source, you can map Satellite user groups to external user groups as described in Section 14.4, "Configuring External User Groups" . User groups are defined in an organizational context, meaning that you must select an organization before you can access user groups. 5.4.2. Creating a User Group Use this procedure to create a user group. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User group . On the User Group tab, specify the name of the new user group and select group members: Select the previously created user groups from the User Groups list. Select users from the Users list. On the Roles tab, select the roles you want to assign to the user group. Alternatively, select the Admin checkbox to assign all available permissions. Click Submit . 
CLI procedure To create a user group, enter the following command: 5.4.3. Removing a User Group Use the Satellite web UI to remove a user group. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Delete to the right of the user group you want to delete. In the alert box that appears, click OK to delete a user group. 5.5. Creating and Managing Roles Satellite provides a set of predefined roles with permissions sufficient for standard tasks, as listed in Section 5.6, "Predefined Roles Available in Satellite" . It is also possible to configure custom roles, and assign one or more permission filters to them. Permission filters define the actions allowed for a certain resource type. Certain Satellite plug-ins create roles automatically. 5.5.1. Creating a Role Use this procedure to create a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Create Role . Provide a Name for the role. Click Submit to save your new role. CLI procedure To create a role, enter the following command: To serve its purpose, a role must contain permissions. After creating a role, proceed to Section 5.5.3, "Adding Permissions to a Role" . 5.5.2. Cloning a Role Use the Satellite web UI to clone a role. Procedure In the Satellite web UI, navigate to Administer > Roles and select Clone from the drop-down menu to the right of the required role. Provide a Name for the role. Click Submit to clone the role. Click the name of the cloned role and navigate to Filters . Edit the permissions as required. Click Submit to save your new role. 5.5.3. Adding Permissions to a Role Use this procedure to add permissions to a role. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Roles . Select Add Filter from the drop-down list to the right of the required role. Select the Resource type from the drop-down list. The (Miscellaneous) group gathers permissions that are not associated with any resource group. Click the permissions you want to select from the Permission list. Depending on the Resource type selected, you can select or deselect the Unlimited and Override checkbox. The Unlimited checkbox is selected by default, which means that the permission is applied on all resources of the selected type. When you disable the Unlimited checkbox, the Search field activates. In this field you can specify further filtering with use of the Satellite search syntax. For more information, see Section 5.7, "Granular Permission Filtering" . When you enable the Override checkbox, you can add additional locations and organizations to allow the role to access the resource type in the additional locations and organizations; you can also remove an already associated location and organization from the resource type to restrict access. Click . Click Submit to save changes. CLI procedure List all available permissions: Add permissions to a role: For more information about roles and permissions parameters, enter the hammer role --help and hammer filter --help commands. 5.5.4. Viewing Permissions of a Role Use the Satellite web UI to view the permissions of a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Filters to the right of the required role to get to the Filters page. The Filters page contains a table of permissions assigned to a role grouped by the resource type. It is also possible to generate a complete table of permissions and actions that you can use on your Satellite system. 
For more information, see Section 5.5.5, "Creating a Complete Permission Table" . 5.5.5. Creating a Complete Permission Table Use the Satellite CLI to create a permission table. Procedure Ensure that the required packages are installed. Execute the following command on Satellite Server: Start the Satellite console with the following command: Insert the following code into the console: The above syntax creates a table of permissions and saves it to the /tmp/table.html file. Press Ctrl + D to exit the Satellite console. Insert the following text at the first line of /tmp/table.html : Append the following text at the end of /tmp/table.html : Open /tmp/table.html in a web browser to view the table. 5.5.6. Removing a Role Use the Satellite web UI to remove a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Select Delete from the drop-down list to the right of the role to be deleted. In an alert box that appears, click OK to delete the role. 5.6. Predefined Roles Available in Satellite The following table provides an overview of permissions that predefined roles in Satellite grant to a user. To view the exact set of permissions a predefined role grants, display the role in Satellite web UI as the privileged user. For more information, see Section 5.5.4, "Viewing Permissions of a Role" . Table 5.1. Permissions provided by role Role Permissions Provided by Role Access Insights Admin Add and edit Insights rules. Access Insights Viewer View Insight reports. Ansible Roles Manager Play roles on hosts and host groups. View, destroy, and import Ansible roles. View, edit, create, destroy, and import Ansible variables. Ansible Tower Inventory Reader View facts, hosts, and host groups. Bookmarks manager Create, edit, and delete bookmarks. Boot disk access Download the boot disk. Compliance manager View, create, edit, and destroy SCAP content files, compliance policies, and tailoring files. View compliance reports. Compliance viewer View compliance reports. Create ARF report Create compliance reports. Default role The set of permissions that every user is granted, irrespective of any other roles. Discovery Manager View, provision, edit, and destroy discovered hosts and manage discovery rules. Discovery Reader View hosts and discovery rules. Edit hosts View, create, edit, destroy, and build hosts. Edit partition tables View, create, edit and destroy partition tables. Manager View and edit global settings. Organization admin All permissions except permissions for managing organizations. An administrator role defined per organization. The role has no visibility into resources in other organizations. By cloning this role and assigning an organization, you can delegate administration of that organization to a user. Red Hat Access Logs View the log viewer and the logs. Remote Execution Manager Control which roles have permission to run infrastructure jobs. Remote Execution User Run remote execution jobs against hosts. Site manager A restrained version of the Manager role. System admin Edit global settings in Administer > Settings . View, create, edit and destroy users, user groups, and roles. View, create, edit, destroy, and assign organizations and locations but not view resources within them. Users with this role can create users and assign all roles to them. Therefore, ensure to give this role only to trusted users. Tasks manager View and edit Satellite tasks. Tasks reader A role that can only view Satellite tasks. 
Viewer A passive role that provides the ability to view the configuration of every element of the Satellite structure, logs, reports, and statistics. View hosts A role that can only view hosts. Virt-who Manager A role with full virt-who permissions. Virt-who Reporter Upload reports generated by virt-who to Satellite. It can be used if you configure virt-who manually and require a user role that has limited virt-who permissions. Virt-who Viewer View virt-who configurations. Users with this role can deploy virt-who instances using existing virt-who configurations. 5.7. Granular Permission Filtering 5.7.1. Granular Permission Filter As mentioned in Section 5.5.3, "Adding Permissions to a Role" , Red Hat Satellite provides the ability to limit the configured user permissions to selected instances of a resource type. These granular filters are queries to the Satellite database and are supported by the majority of resource types. 5.7.2. Creating a Granular Permission Filter Use this procedure to create a granular filter. To use the CLI instead of the Satellite web UI, see the CLI procedure . Satellite does not apply search conditions to create actions. For example, limiting the create_locations action with the name = "Default Location" expression in the search field does not prevent the user from assigning a custom name to the newly created location. Procedure Specify a query in the Search field on the Edit Filter page. Deselect the Unlimited checkbox for the field to be active. Queries have the following form: field_name marks the field to be queried. The range of available field names depends on the resource type. For example, the Partition Table resource type offers family, layout, and name as query parameters. operator specifies the type of comparison between field_name and value. See Section 5.7.4, "Supported Operators for Granular Search" for an overview of applicable operators. value is the value used for filtering. This can be, for example, the name of an organization. Two types of wildcard characters are supported: underscore (_) provides single character replacement, while percent sign (%) replaces zero or more characters. For most resource types, the Search field provides a drop-down list suggesting the available parameters. This list appears after placing the cursor in the search field. For many resource types, you can combine queries using logical operators such as and, not, and has. CLI procedure To create a granular filter, enter the hammer filter create command with the --search option to limit permission filters, for example: This command adds a permission to the qa-user role to view, create, edit, and destroy Content Views, which applies only to Content Views with names starting with ccv. 5.7.3. Examples of Using Granular Permission Filters As an administrator, you can allow selected users to make changes in a certain part of the environment path. The following filter allows you to work with content while it is in the development stage of the application life cycle, but the content becomes inaccessible once it is pushed to production. 5.7.3.1. Applying Permissions for the Host Resource Type The following query applies any permissions specified for the Host resource type only to hosts in the group named host-editors. The following query returns records where the name matches XXXX, Yyyy, or zzzz example strings: You can also limit permissions to a selected environment. 
To do so, specify the environment name in the Search field, for example: You can limit user permissions to a certain organization or location with the use of the granular permission filter in the Search field. However, some resource types provide a GUI alternative, an Override checkbox that provides the Locations and Organizations tabs. On these tabs, you can select from the list of available organizations and locations. For more information, see Section 5.7.3.2, "Creating an Organization Specific Manager Role" . 5.7.3.2. Creating an Organization Specific Manager Role Use the Satellite web UI to create an administrative role restricted to a single organization named org-1 . Procedure In the Satellite web UI, navigate to Administer > Roles . Clone the existing Organization admin role. Select Clone from the drop-down list next to the Filters button. You are then prompted to insert a name for the cloned role, for example org-1 admin . Click the desired locations and organizations to associate them with the role. Click Submit to create the role. Click org-1 admin , and click Filters to view all associated filters. The default filters work for most use cases. However, you can optionally click Edit to change the properties for each filter. For some filters, you can enable the Override option if you want the role to be able to access resources in additional locations and organizations. For example, by selecting the Domain resource type, the Override option, and then additional locations and organizations using the Locations and Organizations tabs, you allow this role to access domains in the additional locations and organizations that are not associated with this role. You can also click New filter to associate new filters with this role. 5.7.4. Supported Operators for Granular Search Table 5.2. Logical Operators Operator Description and Combines search criteria. not Negates an expression. has Object must have a specified property. Table 5.3. Symbolic Operators Operator Description = Is equal to. An equality comparison that is case-sensitive for text fields. != Is not equal to. An inversion of the = operator. ~ Like. A case-insensitive occurrence search for text fields. !~ Not like. An inversion of the ~ operator. ^ In. An equality comparison that is a case-sensitive search for text fields. This generates a different SQL query from the Is equal to comparison, and is more efficient for multiple value comparison. !^ Not in. An inversion of the ^ operator. >, >= Greater than, greater than or equal to. Supported for numerical fields only. <, <= Less than, less than or equal to. Supported for numerical fields only. | [
"hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password",
"hammer user add-role --id user_id --role role_name",
"openssl rand -hex 32",
"hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub",
"hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user",
"hammer user ssh-keys delete --id key_id --user-id user_id",
"hammer user ssh-keys info --id key_id --user-id user_id",
"hammer user ssh-keys list --user-id user_id",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{\"satellite_version\":\"6.11.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }",
"hammer user-group create --name My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2",
"hammer role create --name My_Role_Name",
"hammer filter available-permissions",
"hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name",
"satellite-maintain packages install foreman-console",
"foreman-rake console",
"f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)",
"<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>",
"</table>",
"field_name operator value",
"hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user",
"hostgroup = host-editors",
"name ^ (XXXX, Yyyy, zzzz)",
"Dev"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/Managing_Users_and_Roles_admin |
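A minimal shell sketch that strings the individual hammer commands above together for one new user; the login jsmith, organization ID 1, the Viewer role, and the token variable are placeholder assumptions, and the Personal Access Token itself must still be created in the web UI as described in Section 5.3.1.

```bash
# Placeholder Satellite hostname and example user details.
SATELLITE=satellite.example.com

# Create an internally authenticated user and grant an existing predefined role.
hammer user create --login jsmith --firstname Jane --lastname Smith \
  --mail jsmith@example.com --auth-source-id 1 --organization-ids 1 \
  --password "$(openssl rand -hex 32)"
hammer user add-role --login jsmith --role "Viewer"

# Attach a public SSH key so it can be deployed during provisioning.
hammer user ssh-keys add --user jsmith --name workstation --key-file ~/.ssh/id_rsa.pub

# Authenticate an API request with a Personal Access Token instead of the password.
curl "https://$SATELLITE/api/status" --user "jsmith:$MY_PERSONAL_ACCESS_TOKEN"
```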
Appendix B. Using AMQ Broker with the examples | Appendix B. Using AMQ Broker with the examples The AMQ JavaScript examples require a running message broker with a queue named examples . Use the procedures below to install and start the broker and define the queue. B.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . B.2. Starting the broker Procedure Use the artemis run command to start the broker. $ <broker-instance-dir>/bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. $ example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... B.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named examples . $ <broker-instance-dir>/bin/artemis queue create --name examples --address examples --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. B.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. $ <broker-instance-dir>/bin/artemis stop Revised on 2021-05-07 10:16:18 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/using_the_broker_with_the_examples |
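A minimal shell sketch of the full appendix workflow; the example-broker path stands in for <broker-instance-dir> and depends on where the broker instance was created.

```bash
# Placeholder for the broker instance created in B.1; substitute your own path.
BROKER=~/example-broker

# B.2: start the broker and watch for "Server is now live" in the console output.
"$BROKER/bin/artemis" run

# B.3 (in a second terminal): create the queue used by the examples,
# answering N to each yes/no prompt as described above.
"$BROKER/bin/artemis" queue create --name examples --address examples \
  --auto-create-address --anycast

# B.4: stop the broker when you are done with the examples.
"$BROKER/bin/artemis" stop
```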
Chapter 7. Troubleshooting | Chapter 7. Troubleshooting 7.1. About troubleshooting Amazon EC2 EC2 provides an Alarm Status for each instance, indicating severe instance malfunction, but the absence of such an alarm is no guarantee that the instance has started correctly and services are running properly. It is possible to use Amazon CloudWatch with its custom metric functionality to monitor instance services' health, but use of an enterprise management solution is recommended. 7.2. Diagnostic information If a problem is detected by JBoss Operations Network, Amazon CloudWatch, or manual inspection, common sources of diagnostic information are: /var/log contains all the logs collected from machine startup, JBoss EAP, httpd, and most other services. JBoss EAP log files can be found in /opt/rh/eap8/root/usr/share/wildfly/ . Access to these files is only available using an SSH session. See Getting Started with Amazon EC2 Linux Instances for more information about how to configure and establish an SSH session with an Amazon EC2 instance. Revised on 2024-05-10 16:25:21 UTC | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/assembly-troubleshoot-amazon-ec2_default |
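A minimal shell sketch of gathering the diagnostic information above over SSH; the key pair, public DNS name, and the standalone/log subdirectory are assumptions for illustration, and the exact log location depends on the JBoss EAP configuration in use.

```bash
# Connect to the instance (placeholder key and hostname).
ssh -i ~/.ssh/my-ec2-key.pem ec2-user@ec2-198-51-100-10.compute-1.amazonaws.com

# System-wide logs collected since machine startup, including httpd and other services.
sudo ls /var/log
sudo tail -n 100 /var/log/messages

# JBoss EAP logs under the documented installation prefix.
sudo ls /opt/rh/eap8/root/usr/share/wildfly/
sudo tail -f /opt/rh/eap8/root/usr/share/wildfly/standalone/log/server.log
```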
Chapter 2. Encryption and Key Management | Chapter 2. Encryption and Key Management Inter-device communication is a serious security concern. Secure methods of communication over a network are becoming increasingly important, as demonstrated by significant vulnerabilities such as Heartbleed, or more advanced attacks such as BEAST and CRIME. However, encryption is only one part of a larger security strategy. The compromise of an endpoint means that an attacker no longer needs to break the encryption used, but is able to view and manipulate messages as they are processed by the system. This chapter will review features around configuring Transport Layer Security (TLS) to secure both internal and external resources, and will call out specific categories of systems that should be given specific attention. OpenStack components communicate with each other using various protocols and the communication might involve sensitive or confidential data. An attacker might try to eavesdrop on the channel in order to get access to sensitive information. It is therefore important that all the components must communicate with each other using a secured communication protocol. 2.1. Introduction to TLS and SSL There are situations where there is a security requirement to assure the confidentiality or integrity of network traffic in an OpenStack deployment. You would generally configure this using cryptographic measures, such as the TLS protocol. In a typical deployment, all traffic transmitted over public networks should be security hardened, but security good practice expects that internal traffic must also be secured. It is insufficient to rely on security zone separation for protection. If an attacker gains access to the hypervisor or host resources, compromises an API endpoint, or any other service, they must not be able to easily inject or capture messages, commands, or otherwise affect the management capabilities of the cloud. You should security harden all zones with TLS, including the management zone services and intra-service communications. TLS provides the mechanisms to ensure authentication, non-repudiation, confidentiality, and integrity of user communications to the OpenStack services, and between the OpenStack services themselves. Due to the published vulnerabilities in the Secure Sockets Layer (SSL) protocols, consider using TLS 1.2 or higher in preference to SSL, and that SSL is disabled in all cases, unless you require compatibility with obsolete browsers or libraries. 2.1.1. Public Key Infrastructure Public Key Infrastructure (PKI) is a framework on which to provide encryption algorithms, cipher modes, and protocols for securing data and authentication. It consists of a set of systems and processes to ensure traffic can be sent encrypted while validating the identity of the parties. The PKI profile described here is the Internet Engineering Task Force (IETF) Public Key Infrastructure (PKIX) profile developed by the PKIX working group. The core components of PKI are: Digital Certificates - Signed public key certificates are data structures that have verifiable data of an entity, its public key along with some other attributes. These certificates are issued by a Certificate Authority (CA). As the certificates are signed by a CA that is trusted, once verified, the public key associated with the entity is guaranteed to be associated with the said entity. The most common standard used to define these certificates is the X.509 standard. 
X.509 v3, the current standard, is described in detail in RFC 5280 and updated by RFC 6818. Certificates are issued by CAs as a mechanism to prove the identity of online entities. The CA digitally signs the certificate by creating a message digest from the certificate and encrypting the digest with its private key. End entity - The user, process, or system that is the subject of a certificate. The end entity sends its certificate request to a Registration Authority (RA) for approval. If approved, the RA forwards the request to a Certification Authority (CA). The Certification Authority verifies the request and, if the information is correct, a certificate is generated and signed. This signed certificate is then sent to a Certificate Repository. Relying party - The endpoint that receives the digitally signed certificate that is verifiable with reference to the public key listed on the certificate. The relying party should be in a position to verify the certificate up the chain, ensure that it is not present in the CRL, and verify the expiry date on the certificate. Certification Authority (CA) - A CA is an entity trusted by both the end entity and the relying party for certification policies, management handling, and certificate issuance. Registration Authority (RA) - An optional system to which a CA delegates certain management functions, such as authentication of end entities before they are issued a certificate by a CA. Certificate Revocation Lists (CRL) - A Certificate Revocation List (CRL) is a list of certificate serial numbers that have been revoked. End entities presenting these certificates should not be trusted in a PKI model. Revocation can happen for several reasons, for example, key compromise or CA compromise. CRL issuer - An optional system to which a CA delegates the publication of certificate revocation lists. Certificate Repository - The location where the end entity certificates and certificate revocation lists are stored and queried - sometimes referred to as the certificate bundle. It is strongly recommended that you security harden all services using Public Key Infrastructure (PKI), including using TLS for API endpoints. It is impossible for the encryption or signing of transports or messages alone to solve all these problems. In addition, hosts themselves must be hardened and implement policies, namespaces, and other controls to protect their private credentials and keys. However, the challenges of key management and protection do not reduce the necessity of these controls, or lessen their importance. 2.1.2. Certification Authorities Many organizations have an established Public Key Infrastructure with their own Certification Authority (CA), certificate policies, and management, which they should use to issue certificates for internal OpenStack users or services. Organizations in which the public security zone is Internet facing will additionally need certificates signed by a widely recognized public CA. For cryptographic communications over the management network, it is recommended that you do not use a public CA. Instead, the recommendation is that most deployments deploy their own internal CA. Note Effective use of TLS relies on the deployment being given a domain or subdomain in DNS which can be used by either a wildcard, or a series of specific certificates issued by either a public or internal CA. 
To ensure TLS certificates can be effectively validated, access to platform services would need to be through these DNS records. It is recommended that the OpenStack cloud architect consider using separate PKI deployments for internal systems and customer facing services. This allows the cloud deployer to maintain control of their PKI infrastructure and makes requesting, signing and deploying certificates for internal systems easier. Advanced configurations might use separate PKI deployments for different security zones. This allows OpenStack operators to maintain cryptographic separation of environments, ensuring that certificates issued to one are not recognized by another. Certificates used to support TLS on internet facing cloud endpoints (or customer interfaces where the customer is not expected to have installed anything other than standard operating system provided certificate bundles) should be provisioned using Certificate Authorities that are installed in the operating system certificate bundle. Note There are management, policy, and technical challenges around creating and signing certificates. This is an area where cloud architects or operators might wish to seek the advice of industry leaders and vendors in addition to the guidance recommended here. 2.1.3. Configuring Encryption using Director By default, the overcloud uses unencrypted endpoints for its services. This means that the overcloud configuration requires an additional environment file to enable SSL/TLS for its Public API endpoints. The Advanced Overcloud Customization guide describes how to configure your SSL/TLS certificate and include it as a part of your overcloud creation process: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/sect-enabling_ssltls_on_the_overcloud 2.1.4. TLS libraries Certain components, services, and applications within the OpenStack ecosystem can be configured to use TLS libraries. The TLS and HTTP services within OpenStack are typically implemented using OpenSSL, which has a module that has been validated for FIPS 140-2. However, consider that each application or service can still introduce weaknesses in how they use the OpenSSL libraries. 2.1.5. Deprecation of TLS 1.0 Important FedRAMP-authorized systems are required to move away from TLS 1.0. The recommended level is 1.2, and 1.1 is acceptable only if broad compatibility is required. For more information, see https://www.fedramp.gov/assets/resources/documents/CSP_TLS_Requirements.pdf . For Red Hat OpenStack Platform 13 deployments, TLS 1.0 connections are not accepted by HAProxy, which handles TLS connections for TLS enabled APIs. This is implemented by the no-tlsv10 option. For HA deployments with InternalTLS enabled, cross-node traffic on the controller plane is also encrypted. This includes RabbitMQ, MariaDB, and Redis, among others. MariaDB and Redis have deprecated TLS1.0, and the same deprecation for RabbitMQ is expected to be backported from upstream. 2.1.5.1. Checking whether TLS 1.0 is in use You can use cipherscan to determine whether TLS 1.0 is being presented by your deployment. Cipherscan can be cloned from https://github.com/mozilla/cipherscan . This example output demonstrates results received from horizon : Note Run cipherscan from a non-production system, as it might install additional dependencies when you first run it. 
When scanning a server, Cipherscan advertises support for a specific TLS version, which is the highest TLS version it is willing to negotiate. If the target server correctly follows TLS protocol, it will respond with the highest version that is mutually supported, which may be lower than what Cipherscan initially advertised. If the server does proceed to establish a connection with the client using that specific version, it is not considered to be intolerant to that protocol version. If it does not establish the connection (with the specified version, or any lower version), then intolerance for that version of protocol is considered to be present. For example: In this output, intolerance of TLS 1.0 and TLS 1.1 is reported as PRESENT , meaning that the connection could not be established, and that Cipherscan was unable to connect while advertising support for those TLS versions. As a result, it is reasonable to conclude that those (and any lower) versions of the protocol are not enabled on the scanned server. 2.1.6. Cryptographic algorithms, cipher modes, and protocols You should consider only using TLS 1.2. Other versions, such as TLS 1.0 and 1.1, are vulnerable to multiple attacks and are expressly forbidden by many government agencies and regulated industries. TLS 1.0 should be disabled in your environment. TLS 1.1 might be used for broad client compatibility, however exercise caution when enabling this protocol. Only enable TLS version 1.1 if there is a mandatory compatibility requirement and if you are aware of the risks involved. All versions of SSL (the predecessor to TLS) must not be used due to multiple public vulnerabilities. When you are using TLS 1.2 and control both the clients and the server, the cipher suite should be limited to ECDHE-ECDSA-AES256-GCM-SHA384 . In circumstances where you do not control both endpoints and are using TLS 1.1 or 1.2 the more general HIGH:!aNULL:!eNULL:!DES:!3DES:!SSLv3:!TLSv1:!CAMELLIA is a reasonable cipher selection. Note This guide is not intended as a reference on cryptography, and is not prescriptive about what specific algorithms or cipher modes you should enable or disable in your OpenStack services. 2.2. TLS Proxies and HTTP Services OpenStack endpoints are HTTP services providing APIs to both end-users on public networks and to other OpenStack services on the management network. You can currently encrypt the external requests using TLS. To configure this in Red Hat OpenStack Platform, you can deploy the API services behind HAproxy, which is able to establish and terminate TLS sessions. In cases where software termination offers insufficient performance, hardware accelerators might be worth exploring as an alternative option. This approach would require additional configuration on the platform, and not all hardware load balancers might be compatible with Red Hat OpenStack Platform. It is important to be mindful of the size of requests that will be processed by any chosen TLS proxy. 2.2.1. Perfect Forward Secrecy Configuring TLS servers for perfect forward secrecy requires careful planning around key size, session IDs, and session tickets. In addition, for multi-server deployments, shared state is also an important consideration. Real-world deployments might consider enabling this feature for improved performance. This can be done in a security hardened way, but would require special consideration around key management. Such configurations are beyond the scope of this guide. 2.3. 
Use Barbican to manage secrets OpenStack Key Manager (barbican) is the secrets manager for Red Hat OpenStack Platform. You can use the barbican API and command line to centrally manage the certificates, keys, and passwords used by OpenStack services. Barbican currently supports the following use cases: Symmetric encryption keys - used for Block Storage (cinder) volume encryption, ephemeral disk encryption, and Object Storage (swift) object encryption. Asymmetric keys and certificates - glance image signing and verification, Octavia TLS load balancing. In this release, barbican offers integration with the cinder, swift, Octavia, and Compute (nova) components. For example, you can use barbican for the following use cases: Support for Encrypted Volumes - You can use barbican to manage your cinder encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. The key management aspect is performed transparently to the user. Glance Image Signing - You can configure the Image Service (glance) to verify that an uploaded image has not been tampered with. The image is first signed with a key that is stored in barbican, with the image then being validated before each use. For more information, see the Barbican guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/manage_secrets_with_openstack_key_manager/ | [
"./cipherscan https://openstack.lab.local ........................... Target: openstack.lab.local:443 prio ciphersuite protocols pfs curves 1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 ECDH,P-256,256bits prime256v1 2 ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 ECDH,P-256,256bits prime256v1 3 DHE-RSA-AES128-GCM-SHA256 TLSv1.2 DH,1024bits None 4 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 DH,1024bits None 5 ECDHE-RSA-AES128-SHA256 TLSv1.2 ECDH,P-256,256bits prime256v1 6 ECDHE-RSA-AES256-SHA384 TLSv1.2 ECDH,P-256,256bits prime256v1 7 ECDHE-RSA-AES128-SHA TLSv1.2 ECDH,P-256,256bits prime256v1 8 ECDHE-RSA-AES256-SHA TLSv1.2 ECDH,P-256,256bits prime256v1 9 DHE-RSA-AES128-SHA256 TLSv1.2 DH,1024bits None 10 DHE-RSA-AES128-SHA TLSv1.2 DH,1024bits None 11 DHE-RSA-AES256-SHA256 TLSv1.2 DH,1024bits None 12 DHE-RSA-AES256-SHA TLSv1.2 DH,1024bits None 13 ECDHE-RSA-DES-CBC3-SHA TLSv1.2 ECDH,P-256,256bits prime256v1 14 EDH-RSA-DES-CBC3-SHA TLSv1.2 DH,1024bits None 15 AES128-GCM-SHA256 TLSv1.2 None None 16 AES256-GCM-SHA384 TLSv1.2 None None 17 AES128-SHA256 TLSv1.2 None None 18 AES256-SHA256 TLSv1.2 None None 19 AES128-SHA TLSv1.2 None None 20 AES256-SHA TLSv1.2 None None 21 DES-CBC3-SHA TLSv1.2 None None Certificate: trusted, 2048 bits, sha256WithRSAEncryption signature TLS ticket lifetime hint: None NPN protocols: None OCSP stapling: not supported Cipher ordering: server Curves ordering: server - fallback: no Server supports secure renegotiation Server supported compression methods: NONE TLS Tolerance: yes Intolerance to: SSL 3.254 : absent TLS 1.0 : PRESENT TLS 1.1 : PRESENT TLS 1.2 : absent TLS 1.3 : absent TLS 1.4 : absent",
"Intolerance to: SSL 3.254 : absent TLS 1.0 : PRESENT TLS 1.1 : PRESENT TLS 1.2 : absent TLS 1.3 : absent TLS 1.4 : absent"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/encryption_and_key_management |
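As a small illustration of the barbican workflow described above, the following sketch stores a randomly generated symmetric key as a secret and then lists the stored secrets with the unified OpenStack client. The secret name and the credentials file name are assumptions for the example; the commands require the python-barbicanclient plugin to be installed.

# Load credentials for the cloud that runs the Key Manager service (assumed file name).
source ~/overcloudrc

# Store a randomly generated 256-bit key as a secret named "demo-aes-key" (illustrative name).
openstack secret store --name demo-aes-key \
    --payload-content-type "text/plain" \
    --payload "$(openssl rand -base64 32)"

# Confirm that the secret is now available to authorized users.
openstack secret list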
14.3. Locking Types | 14.3. Locking Types 14.3.1. About Optimistic Locking Optimistic locking allows multiple transactions to complete simultaneously by deferring lock acquisition to the transaction prepare time. Optimistic mode assumes that multiple transactions can complete without conflict. It is ideal where there is little contention between multiple transactions running concurrently, as transactions can commit without waiting for other transaction locks to clear. With writeSkewCheck enabled, transactions in optimistic locking mode roll back if one or more conflicting modifications are made to the data before the transaction completes. 14.3.2. About Pessimistic Locking Pessimistic locking is also known as eager locking. Pessimistic locking prevents more than one transaction from modifying the value of a key by enforcing cluster-wide locks on each write operation. Locks are only released once the transaction is completed, either through committing or being rolled back. Pessimistic mode is used where there is high contention on keys, which would otherwise result in inefficiencies and unexpected rollback operations. 14.3.3. Pessimistic Locking Types Red Hat JBoss Data Grid includes explicit pessimistic locking and implicit pessimistic locking: Explicit Pessimistic Locking, which uses the JBoss Data Grid Lock API to allow cache users to explicitly lock cache keys for the duration of a transaction. The Lock call attempts to obtain locks on specified cache keys across all nodes in a cluster. This attempt either fails or succeeds for all specified cache keys. All locks are released during the commit or rollback phase. Implicit Pessimistic Locking ensures that cache keys are locked in the background as they are accessed for modification operations. Using Implicit Pessimistic Locking causes JBoss Data Grid to check and ensure that cache keys are locked locally for each modification operation. Discovering unlocked cache keys causes JBoss Data Grid to request a cluster-wide lock to acquire a lock on the unlocked cache key. 14.3.4. Explicit Pessimistic Locking Example The following is an example of explicit pessimistic locking that depicts a transaction that runs on one of the cache nodes: Procedure 14.3. Transaction with Explicit Pessimistic Locking When the line cache.lock(K) executes, a cluster-wide lock is acquired on K . When the line cache.put(K,V5) executes, it is guaranteed to succeed. When the line tx.commit() executes, the locks held for this transaction are released. 14.3.5. Implicit Pessimistic Locking Example An example of implicit pessimistic locking using a transaction that runs on one of the cache nodes is as follows: Procedure 14.4. Transaction with Implicit Pessimistic Locking When the line cache.put(K,V) executes, a cluster-wide lock is acquired on K . When the line cache.put(K2,V2) executes, a cluster-wide lock is acquired on K2 . When the line cache.put(K,V5) executes, no new lock is acquired because a cluster-wide lock for K was previously acquired. The put operation still occurs. When the line tx.commit() executes, all locks held for this transaction are released. 14.3.6.
Configure Locking Mode (Remote Client-Server Mode) To configure a locking mode in Red Hat JBoss Data Grid's Remote Client-Server mode, use the transaction element as follows: 14.3.7. Configure Locking Mode (Library Mode) In Red Hat JBoss Data Grid's Library mode, the locking mode is set within the transaction element as follows: Set the lockingMode value to OPTIMISTIC or PESSIMISTIC to configure the locking mode used for the transactional cache. | [
"tx.begin() cache.lock(K) cache.put(K,V5) tx.commit()",
"tx.begin() cache.put(K,V) cache.put(K2,V2) cache.put(K,V5) tx.commit()",
"<transaction locking=\"{OPTIMISTIC/PESSIMISTIC}\" />",
"<transaction transactionManagerLookupClass=\"{TransactionManagerLookupClass}\" transactionMode=\"{TRANSACTIONAL,NON_TRANSACTIONAL}\" lockingMode=\"{OPTIMISTIC,PESSIMISTIC}\" useSynchronization=\"true\"> </transaction>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-locking_types |
C.3. Rule Instances | C.3. Rule Instances This section discusses the rule instances that have been set. C.3.1. LdapCaCertRule The LdapCaCertRule can be used to publish CA certificates to an LDAP directory. Table C.11. LdapCaCert Rule Configuration Parameters Parameter Value Description type cacert Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapCaCertMap Specifies the mapper used with the rule. See Section C.2.1.1, "LdapCaCertMap" for details on the mapper. publisher LdapCaCertPublisher Specifies the publisher used with the rule. See Section C.1.2, "LdapCaCertPublisher" for details on the publisher. C.3.2. LdapXCertRule The LdapXCertRule is used to publish cross-pair certificates to an LDAP directory. Table C.12. LdapXCert Rule Configuration Parameters Parameter Value Description type xcert Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapCaCertMap Specifies the mapper used with the rule. See Section C.2.1.1, "LdapCaCertMap" for details on the mapper. publisher LdapCrossCertPairPublisher Specifies the publisher used with the rule. See Section C.1.6, "LdapCertificatePairPublisher" for details on this publisher. C.3.3. LdapUserCertRule The LdapUserCertRule is used to publish user certificates to an LDAP directory. Table C.13. LdapUserCert Rule Configuration Parameters Parameter Value Description type certs Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapUserCertMap Specifies the mapper used with the rule. See Section C.2.3, "LdapSimpleMap" for details on the mapper. publisher LdapUserCertPublisher Specifies the publisher used with the rule. See Section C.1.3, "LdapUserCertPublisher" for details on the publisher. C.3.4. LdapCRLRule The LdapCRLRule is used to publish CRLs to an LDAP directory. Table C.14. LdapCRL Rule Configuration Parameters Parameter Value Description type crl Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapCrlMap Specifies the mapper used with the rule. See Section C.2.1.2, "LdapCrlMap" for details on the mapper. publisher LdapCrlPublisher Specifies the publisher used with the rule. See Section C.1.4, "LdapCrlPublisher" for details on the publisher. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Rule_Instances |
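To confirm that a rule such as LdapCaCertRule or LdapCRLRule actually published its object, you can query the directory with ldapsearch. The sketch below is only an illustration: the LDAP URL, bind DN, and CA entry DN are placeholders that must be replaced with the values used by your Directory Server and your publishing configuration.

# Placeholder connection details for the LDAP directory used for publishing.
LDAP_URL=ldap://ldap.example.com:389
BIND_DN="cn=Directory Manager"
CA_ENTRY="cn=Certificate Authority,o=example.com"

# Check that the CA entry carries a published CA certificate and CRL.
ldapsearch -x -H "$LDAP_URL" -D "$BIND_DN" -W \
    -b "$CA_ENTRY" -s base \
    "(objectClass=*)" cACertificate certificateRevocationList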
Chapter 7. Installing director on the undercloud | Chapter 7. Installing director on the undercloud To configure and install director, set the appropriate parameters in the undercloud.conf file and run the undercloud installation command. After you have installed director, import the overcloud images that director will use to write to bare metal nodes during node provisioning. 7.1. Configuring director The director installation process requires certain settings in the undercloud.conf configuration file, which director reads from the home directory of the stack user. Complete the following steps to copy default template as a foundation for your configuration. Procedure Copy the default template to the home directory of the stack user's: Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value. 7.2. Director configuration parameters The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors. Important At minimum, you must set the container_images_file parameter to the environment file that contains your container image configuration. Without this parameter properly set to the appropriate file, director cannot obtain your container image rule set from the ContainerImagePrepare parameter nor your container registry authentication details from the ContainerImageRegistryCredentials parameter. Defaults The following parameters are defined in the [DEFAULT] section of the undercloud.conf file: additional_architectures A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports only the x86_64 architecture. certificate_generation_ca The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain. clean_nodes Defines whether to wipe the hard drive between deployments and after introspection. cleanup Cleanup temporary files. Set this to False to leave the temporary files used during deployment in place after you run the deployment command. This is useful for debugging the generated files or if errors occur. container_cli The CLI tool for container management. Leave this parameter set to podman . Red Hat Enterprise Linux 9.0 only supports podman . container_healthcheck_disabled Disables containerized service health checks. Red Hat recommends that you enable health checks and leave this option set to false . container_images_file Heat environment file with container image information. This file can contain the following entries: Parameters for all required container images The ContainerImagePrepare parameter to drive the required image preparation. Usually the file that contains this parameter is named containers-prepare-parameter.yaml . container_insecure_registries A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite Server if the undercloud is registered to Satellite. 
container_registry_mirror An optional registry-mirror configured that podman uses. custom_env_files Additional environment files that you want to add to the undercloud installation. deployment_user The user who installs the undercloud. Leave this parameter unset to use the current default user stack . discovery_default_driver Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery parameter to be enabled and you must include the driver in the enabled_hardware_types list. enable_ironic; enable_ironic_inspector; enable_tempest; enable_validations Defines the core services that you want to enable for director. Leave these parameters set to true . enable_node_discovery Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake driver as a default but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes. enable_routed_networks Defines whether to enable support for routed control plane networks. enabled_hardware_types A list of hardware types that you want to enable for the undercloud. generate_service_certificate Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem . The CA defined in the certificate_generation_ca parameter signs this certificate. heat_container_image URL for the heat container image to use. Leave unset. heat_native Run host-based undercloud configuration using heat-all . Leave as true . hieradata_override Path to hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For more information about using this feature, see Configuring hieradata on the undercloud . inspection_extras Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect packages on the introspection image. inspection_interface The bridge that director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane . inspection_runbench Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. ipv6_address_mode IPv6 address configuration mode for the undercloud provisioning network. The following list contains the possible values for this parameter: dhcpv6-stateless - Address configuration using router advertisement (RA) and optional information using DHCPv6. dhcpv6-stateful - Address configuration and optional information using DHCPv6. ipxe_enabled Defines whether to use iPXE or standard PXE. The default is true , which enables iPXE. Set this parameter to false to use standard PXE. For PowerPC deployments, or for hybrid PowerPC and x86 deployments, set this value to false . local_interface The chosen interface for the director Provisioning NIC. This is also the device that director uses for DHCP and PXE boot services. 
Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command: In this example, the External NIC uses em0 and the Provisioning NIC uses em1 , which is currently not configured. In this case, set the local_interface to em1 . The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter. local_ip The IP address defined for the director Provisioning NIC. This is also the IP address that director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if this IP address conflicts with an existing IP address or subnet in your environment. For IPv6, the local IP address prefix length must be /64 to support both stateful and stateless connections. local_mtu The maximum transmission unit (MTU) that you want to use for the local_interface . Do not exceed 1500 for the undercloud. local_subnet The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet . net_config_override Path to network configuration override template. If you set this parameter, the undercloud uses a JSON or YAML format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf . Use this parameter when you want to configure bonding or add an option to the interface. For more information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . networks_file Networks file to override for heat . output_dir Directory to output state, processed heat templates, and Ansible deployment files. overcloud_domain_name The DNS domain name that you want to use when you deploy the overcloud. Note When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud. roles_file The roles file that you want to use to override the default roles file for undercloud installation. It is highly recommended to leave this parameter unset so that the director installation uses the default roles file. scheduler_max_attempts The maximum number of times that the scheduler attempts to deploy an instance. This value must be greater or equal to the number of bare metal nodes that you expect to deploy at once to avoid potential race conditions when scheduling. service_principal The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA. subnets List of routed network subnets for provisioning and introspection. The default value includes only the ctlplane-subnet subnet. For more information, see Subnets . templates Heat templates file to override. undercloud_admin_host The IP address or hostname defined for director admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_admin_host is not in the same IP network as the local_ip , you must configure the interface on which you want the admin APIs on the undercloud to listen. By default, the admin APIs listen on the br-ctlplane interface. For information about how to configure undercloud network interfaces, see Configuring undercloud network interfaces . 
undercloud_debug Sets the log level of undercloud services to DEBUG . Set this value to true to enable DEBUG log level. undercloud_enable_selinux Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue. undercloud_hostname Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but you must configure all system host name settings appropriately. undercloud_log_file The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log . undercloud_nameservers A list of DNS nameservers to use for the undercloud hostname resolution. undercloud_ntp_servers A list of network time protocol servers to help synchronize the undercloud date and time. undercloud_public_host The IP address or hostname defined for director public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_public_host is not in the same IP network as the local_ip , you must set the PublicVirtualInterface parameter to the public-facing interface on which you want the public APIs on the undercloud to listen. By default, the public APIs listen on the br-ctlplane interface. Set the PublicVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_service_certificate The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate. undercloud_timezone Host timezone for the undercloud. If you do not specify a timezone, director uses the existing timezone configuration. undercloud_update_packages Defines whether to update packages during the undercloud installation. Subnets Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet , use the following sample in your undercloud.conf file: You can specify as many provisioning networks as necessary to suit your environment. Important Director cannot change the IP addresses for a subnet after director creates the subnet. cidr The network that director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network. masquerade Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through director. Note The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter. dhcp_start; dhcp_end The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains enough IP addresses to allocate to your nodes. 
If not specified for the subnet, director determines the allocation pools by removing the values set for the local_ip , gateway , undercloud_admin_host , undercloud_public_host , and inspection_iprange parameters from the subnets full IP range. You can configure non-contiguous allocation pools for undercloud control plane subnets by specifying a list of start and end address pairs. Alternatively, you can use the dhcp_exclude option to exclude IP addresses within an IP address range. For example, the following configurations both create allocation pools 172.20.0.100-172.20.0.150 and 172.20.0.200-172.20.0.250 : Option 1 Option 2 dhcp_exclude IP addresses to exclude in the DHCP allocation range. For example, the following configuration excludes the IP address 172.20.0.105 and the IP address range 172.20.0.210-172.20.0.219 : dns_nameservers DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet uses nameservers defined in the undercloud_nameservers parameter. gateway The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for director or want to use an external gateway directly. host_routes Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud. inspection_iprange Temporary IP range for nodes on this network to use during the inspection process. This range must not overlap with the range defined by dhcp_start and dhcp_end but must be in the same IP subnet. Modify the values of these parameters to suit your configuration. When complete, save the file. 7.3. Configuring the undercloud with environment files You configure the main parameters for the undercloud through the undercloud.conf file. You can also perform additional undercloud configuration with an environment file that contains heat parameters. Procedure Create an environment file named /home/stack/templates/custom-undercloud-params.yaml . Edit this file and include your heat parameters. For example, to enable debugging for certain OpenStack Platform services include the following snippet in the custom-undercloud-params.yaml file: Save this file when you have finished. Edit your undercloud.conf file and scroll to the custom_env_files parameter. Edit the parameter to point to your custom-undercloud-params.yaml environment file: Note You can specify multiple environment files using a comma-separated list. The director installation includes this environment file during the undercloud installation or upgrade operation. 7.4. Common heat parameters for undercloud configuration The following table contains some common heat parameters that you might set in a custom environment file for your undercloud. Parameter Description AdminPassword Sets the undercloud admin user password. AdminEmail Sets the undercloud admin user email address. Debug Enables debug mode. Set these parameters in your custom environment file under the parameter_defaults section: 7.5. Configuring hieradata on the undercloud You can provide custom configuration for services beyond the available undercloud.conf parameters by configuring Puppet hieradata on the director. Procedure Create a hieradata override file, for example, /home/stack/hieradata.yaml . Add the customized hieradata to the file. 
For example, add the following snippet to modify the Compute (nova) service parameter force_raw_images from the default value of True to False : If there is no Puppet implementation for the parameter you want to set, then use the following method to configure the parameter: For example: Set the hieradata_override parameter in the undercloud.conf file to the path of the new /home/stack/hieradata.yaml file: 7.6. Configuring the undercloud for bare metal provisioning over IPv6 If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations: Dual stack IPv4/6 is not available. Tempest validations might not perform correctly. IPv4 to IPv6 migration is not available during upgrades. Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform. Prerequisites An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on the undercloud in the IPv6 Networking for the Overcloud guide. Procedure Open your undercloud.conf file. Specify the IPv6 address mode as either stateless or stateful: Replace <address_mode> with dhcpv6-stateless or dhcpv6-stateful , based on the mode that your NIC supports. Note When you use the stateful address mode, the firmware, chain loaders, and operating systems might use different algorithms to generate an ID that the DHCP server tracks. DHCPv6 does not track addresses by MAC, and does not provide the same address back if the identifier value from the requester changes but the MAC address remains the same. Therefore, when you use stateful DHCPv6 you must also complete the step to configure the network interface. If you configured your undercloud to use stateful DHCPv6, specify the network interface to use for bare metal nodes: Set the default network interface for bare metal nodes: Specify whether or not the undercloud should create a router on the provisioning network: Replace <true/false> with true to enable routed networks and prevent the undercloud creating a router on the provisioning network. When true , the data center router must provide router advertisements. Replace <true/false> with false to disable routed networks and create a router on the provisioning network. Configure the local IP address, and the IP address for the director Admin API and Public API endpoints over SSL/TLS: Replace <ipv6_address> with the IPv6 address of the undercloud. Optional: Configure the provisioning network that director uses to manage instances: Replace <ipv6_address> with the IPv6 address of the network to use for managing instances when not using the default provisioning network. Replace <ipv6_prefix> with the IP address prefix of the network to use for managing instances when not using the default provisioning network. Configure the DHCP allocation range for provisioning nodes: Replace <ipv6_address_dhcp_start> with the IPv6 address of the start of the network range to use for the overcloud nodes. Replace <ipv6_address_dhcp_end> with the IPv6 address of the end of the network range to use for the overcloud nodes. Optional: Configure the gateway for forwarding traffic to the external network: Replace <ipv6_gateway_address> with the IPv6 address of the gateway when not using the default gateway. 
Configure the DHCP range to use during the inspection process: Replace <ipv6_address_inspection_start> with the IPv6 address of the start of the network range to use during the inspection process. Replace <ipv6_address_inspection_end> with the IPv6 address of the end of the network range to use during the inspection process. Note This range must not overlap with the range defined by dhcp_start and dhcp_end , but must be in the same IP subnet. Configure an IPv6 nameserver for the subnet: Replace <ipv6_dns> with the DNS nameservers specific to the subnet. 7.7. Configuring undercloud network interfaces Include custom network configuration in the undercloud.conf file to install the undercloud with specific networking functionality. For example, some interfaces might not have DHCP. In this case, you must disable DHCP for these interfaces in the undercloud.conf file so that os-net-config can apply the configuration during the undercloud installation process. Procedure Log in to the undercloud host. Create a new file undercloud-os-net-config.yaml and include the network configuration that you require. For more information, see Network interface reference . Here is an example: To create a network bond for a specific interface, use the following sample: Include the path to the undercloud-os-net-config.yaml file in the net_config_override parameter in the undercloud.conf file: Note Director uses the file that you include in the net_config_override parameter as the template to generate the /etc/os-net-config/config.yaml file. os-net-config manages the interfaces that you define in the template, so you must perform all undercloud network interface customization in this file. Install the undercloud. Verification After the undercloud installation completes successfully, verify that the /etc/os-net-config/config.yaml file contains the relevant configuration: 7.8. Installing director Complete the following steps to install director and perform some basic post-installation tasks. Procedure Run the following command to install director on the undercloud: This command launches the director configuration script. Director installs additional packages and configures its services according to the configuration in the undercloud.conf . This script takes several minutes to complete. The script generates two files: /home/stack/tripleo-deploy/undercloud/tripleo-undercloud-passwords.yaml - A list of all passwords for the director services. /home/stack/stackrc - A set of initialization variables to help you access the director command line tools. The script also starts all OpenStack Platform service containers automatically. You can check the enabled containers with the following command: To initialize the stack user to use the command line tools, run the following command: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud; The director installation is complete. You can now use the director command line tools. 7.9. Obtaining images for overcloud nodes Director requires several disk images to provision overcloud nodes: An introspection kernel and ramdisk for bare metal system introspection over PXE boot. A deployment kernel and ramdisk for system provisioning and deployment. An overcloud kernel, ramdisk, and full image, which form a base overcloud system that director writes to the hard disk of the node. You can obtain and install the images you need. 
You can also obtain and install a basic image to provision a bare OS when you do not want to run any other Red Hat OpenStack Platform (RHOSP) services or consume one of your subscription entitlements. 7.9.1. Installing the overcloud images Your Red Hat OpenStack Platform (RHOSP) installation includes packages that provide you with the overcloud-hardened-uefi-full.qcow2 overcloud image for director. This image is necessary for deployment of the overcloud with the default CPU architecture, x86-64. Importing this image into director also installs introspection images on the director PXE server. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Install the rhosp-director-images-uefi-x86_64 and rhosp-director-images-ipa-x86_64 packages: Create the images directory in the home directory of the stack user, /home/stack/images : Skip this step if the directory already exists. Extract the images archives to the images directory: Import the images into director: This command converts the image format from QCOW to RAW, and provides verbose updates on the status of the image upload progress. Verify that the overcloud images are copied to /var/lib/ironic/images/ : Verify that director has copied the introspection PXE images to /var/lib/ironic/httpboot : 7.9.2. Minimal overcloud image You can use the overcloud-minimal image to provision a bare OS where you do not want to run any other Red Hat OpenStack Platform (RHOSP) services or consume one of your subscription entitlements. Your RHOSP installation includes the overcloud-minimal package that provides you with the following overcloud images for director: overcloud-minimal overcloud-minimal-initrd overcloud-minimal-vmlinuz Procedure Log in to the undercloud as the stack user. Source the stackrc file: Install the overcloud-minimal package: Extract the images archives to the images directory in the home directory of the stack user ( /home/stack/images ): Import the images into director: The command provides updates on the status of the image upload progress: 7.10. Updating the undercloud configuration If you need to change the undercloud configuration to suit new requirements, you can make changes to your undercloud configuration after installation, edit the relevant configuration files and re-run the openstack undercloud install command. Procedure Modify the undercloud configuration files. For example, edit the undercloud.conf file and add the idrac hardware type to the list of enabled hardware types: Run the openstack undercloud install command to refresh your undercloud with the new changes: Wait until the command runs to completion. Initialize the stack user to use the command line tools,: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud: Verify that director has applied the new configuration. For this example, check the list of enabled hardware types: The undercloud re-configuration is complete. 7.11. Undercloud container registry Red Hat Enterprise Linux 9.0 no longer includes the docker-distribution package, which installed a Docker Registry v2. To maintain the compatibility and the same level of feature, the director installation creates an Apache web server with a vhost called image-serve to provide a registry. This registry also uses port 8787/TCP with SSL disabled. 
The Apache-based registry is not containerized, which means that you must run the following command to restart the registry: You can find the container registry logs in the following locations: /var/log/httpd/image_serve_access.log /var/log/httpd/image_serve_error.log. The image content is served from /var/lib/image-serve . This location uses a specific directory layout and apache configuration to implement the pull function of the registry REST API. The Apache-based registry does not support podman push nor buildah push commands, which means that you cannot push container images using traditional methods. To modify images during deployment, use the container preparation workflow, such as the ContainerImagePrepare parameter. To manage container images, use the container management commands: openstack tripleo container image list Lists all images stored on the registry. openstack tripleo container image show Show metadata for a specific image on the registry. openstack tripleo container image push Push an image from a remote registry to the undercloud registry. openstack tripleo container image delete Delete an image from the registry. | [
"[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf",
"2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic em0 valid_lft 3462sec preferred_lft 3462sec inet6 fe80::5054:ff:fe75:2409/64 scope link valid_lft forever preferred_lft forever 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff",
"[ctlplane-subnet] cidr = 192.168.24.0/24 dhcp_start = 192.168.24.5 dhcp_end = 192.168.24.24 inspection_iprange = 192.168.24.100,192.168.24.120 gateway = 192.168.24.1 masquerade = true",
"dhcp_start = 172.20.0.100,172.20.0.200 dhcp_end = 172.20.0.150,172.20.0.250",
"dhcp_start = 172.20.0.100 dhcp_end = 172.20.0.250 dhcp_exclude = 172.20.0.151-172.20.0.199",
"dhcp_exclude = 172.20.0.105,172.20.0.210-172.20.0.219",
"parameter_defaults: Debug: True",
"custom_env_files = /home/stack/templates/custom-undercloud-params.yaml",
"parameter_defaults: Debug: True AdminPassword: \"myp@ssw0rd!\" AdminEmail: \"[email protected]\"",
"nova::compute::force_raw_images: False",
"nova::config::nova_config: DEFAULT/<parameter_name>: value: <parameter_value>",
"nova::config::nova_config: DEFAULT/network_allocate_retries: value: 20 ironic/serial_console_state_timeout: value: 15",
"hieradata_override = /home/stack/hieradata.yaml",
"[DEFAULT] ipv6_address_mode = <address_mode>",
"[DEFAULT] ipv6_address_mode = dhcpv6-stateful ironic_enabled_network_interfaces = neutron,flat",
"[DEFAULT] ironic_default_network_interface = neutron",
"[DEFAULT] enable_routed_networks: <true/false>",
"[DEFAULT] local_ip = <ipv6_address> undercloud_admin_host = <ipv6_address> undercloud_public_host = <ipv6_address>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address> inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address> inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end> dns_nameservers = <ipv6_dns>",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 members: - type: interface name: nic2",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 members: - name: bond-ctlplane type: linux_bond use_dhcp: false bonding_options: \"mode=active-backup\" mtu: 1500 members: - type: interface name: nic2 - type: interface name: nic3",
"[DEFAULT] net_config_override=undercloud-os-net-config.yaml",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 members: - type: interface name: nic2",
"[stack@director ~]USD openstack undercloud install",
"[stack@director ~]USD sudo podman ps",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images-uefi-x86_64 rhosp-director-images-ipa-x86_64",
"(undercloud) [stack@director ~]USD mkdir /home/stack/images",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD for i in /usr/share/rhosp-director-images/ironic-python-agent-latest.tar /usr/share/rhosp-director-images/overcloud-hardened-uefi-full-latest.tar; do tar -xvf USDi; done",
"(undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/",
"(undercloud) [stack@director images]USD ls -l /var/lib/ironic/images/ total 1955660 -rw-r--r--. 1 root 42422 40442450944 Jan 29 11:59 overcloud-hardened-uefi-full.raw",
"(undercloud) [stack@director images]USD ls -l /var/lib/ironic/httpboot total 417296 -rwxr-xr-x. 1 root root 6639920 Jan 29 14:48 agent.kernel -rw-r--r--. 1 root root 420656424 Jan 29 14:48 agent.ramdisk -rw-r--r--. 1 42422 42422 758 Jan 29 14:29 boot.ipxe -rw-r--r--. 1 42422 42422 488 Jan 29 14:16 inspector.ipxe",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images-minimal",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD tar xf /usr/share/rhosp-director-images/overcloud-minimal-latest-17.0.tar",
"(undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/ --image-type os --os-image-name overcloud-minimal.qcow2",
"Image \"file:///var/lib/ironic/images/overcloud-minimal.vmlinuz\" was copied. +---------------------------------------------------------+-------------------+----------+ | Path | Name | Size | +---------------------------------------------------------+-------------------+----------+ | file:///var/lib/ironic/images/overcloud-minimal.vmlinuz | overcloud-minimal | 11172880 | +---------------------------------------------------------+-------------------+----------+ Image \"file:///var/lib/ironic/images/overcloud-minimal.initrd\" was copied. +--------------------------------------------------------+-------------------+----------+ | Path | Name | Size | +--------------------------------------------------------+-------------------+----------+ | file:///var/lib/ironic/images/overcloud-minimal.initrd | overcloud-minimal | 63575845 | +--------------------------------------------------------+-------------------+----------+ Image \"file:///var/lib/ironic/images/overcloud-minimal.raw\" was copied. +-----------------------------------------------------+-------------------+------------+ | Path | Name | Size | +-----------------------------------------------------+-------------------+------------+ | file:///var/lib/ironic/images/overcloud-minimal.raw | overcloud-minimal | 2912878592 | +-----------------------------------------------------+-------------------+------------+",
"enabled_hardware_types = ipmi,redfish,idrac",
"[stack@director ~]USD openstack undercloud install",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD",
"(undercloud) [stack@director ~]USD openstack baremetal driver list +---------------------+----------------------+ | Supported driver(s) | Active host(s) | +---------------------+----------------------+ | idrac | director.example.com | | ipmi | director.example.com | | redfish | director.example.com | +---------------------+----------------------+",
"sudo systemctl restart httpd"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_installing-director-on-the-undercloud |
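Because the undercloud registry described above is a plain Apache vhost rather than a containerized service, a quick way to confirm that it is serving images is to check the httpd service, probe the port, and compare the result with the tripleo client. This is a minimal sketch; the address below assumes the default local_ip, so adjust it if your undercloud uses a different provisioning IP, and note that the HTTP probe is only a liveness hint, not a full API check.

# Placeholder address: the default local_ip of the undercloud.
REGISTRY=192.168.24.1:8787

# The registry is served by httpd, so the Apache service must be active.
sudo systemctl is-active httpd

# A simple HTTP probe of the image-serve vhost; any HTTP status shows the port is answering.
curl -s -o /dev/null -w "%{http_code}\n" "http://${REGISTRY}/v2/"

# List the images the registry currently holds, using the command described above.
openstack tripleo container image list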
Chapter 4. Useful SystemTap Scripts | Chapter 4. Useful SystemTap Scripts This chapter enumerates several SystemTap scripts you can use to monitor and investigate different subsystems. All of these scripts are available in the /usr/share/systemtap/testsuite/systemtap.examples/ directory once you install the systemtap-testsuite package. 4.1. Network The following sections showcase scripts that trace network-related functions and build a profile of network activity. 4.1.1. Network Profiling This section describes how to profile network activity. Example 4.1, "nettop.stp" provides a glimpse into how much network traffic each process is generating on a machine. Example 4.1. nettop.stp #! /usr/bin/env stap global ifxmit, ifrecv global ifmerged probe netdev.transmit { ifxmit[pid(), dev_name, execname(), uid()] <<< length } probe netdev.receive { ifrecv[pid(), dev_name, execname(), uid()] <<< length } function print_activity() { printf("%5s %5s %-7s %7s %7s %7s %7s %-15s\n", "PID", "UID", "DEV", "XMIT_PK", "RECV_PK", "XMIT_KB", "RECV_KB", "COMMAND") foreach ([pid, dev, exec, uid] in ifrecv) { ifmerged[pid, dev, exec, uid] += @count(ifrecv[pid,dev,exec,uid]); } foreach ([pid, dev, exec, uid] in ifxmit) { ifmerged[pid, dev, exec, uid] += @count(ifxmit[pid,dev,exec,uid]); } foreach ([pid, dev, exec, uid] in ifmerged-) { n_xmit = @count(ifxmit[pid, dev, exec, uid]) n_recv = @count(ifrecv[pid, dev, exec, uid]) printf("%5d %5d %-7s %7d %7d %7d %7d %-15s\n", pid, uid, dev, n_xmit, n_recv, n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0, n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0, exec) } print("\n") delete ifxmit delete ifrecv delete ifmerged } probe timer.ms(5000), end, error { print_activity() } Note that the print_activity() function uses the following expressions: n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0 n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0 These expressions are if or else conditionals. The second statement is simply a more concise way of writing the following pseudo code: if n_recv != 0 then @sum(ifrecv[pid, dev, exec, uid])/1024 else 0 Example 4.1, "nettop.stp" tracks which processes are generating network traffic on the system, and provides the following information about each process: PID - the ID of the listed process. UID - user ID. A user ID of 0 refers to the root user. DEV - which ethernet device the process used to send or receive data (for example, eth0, eth1) XMIT_PK - number of packets transmitted by the process RECV_PK - number of packets received by the process XMIT_KB - amount of data sent by the process, in kilobytes RECV_KB - amount of data received by the service, in kilobytes Example 4.1, "nettop.stp" provides network profile sampling every 5 seconds. You can change this setting by editing probe timer.ms(5000) accordingly. Example 4.2, "Example 4.1, "nettop.stp" Sample Output" contains an excerpt of the output from Example 4.1, "nettop.stp" over a 20-second period: Example 4.2. Example 4.1, "nettop.stp" Sample Output 4.1.2. Tracing Functions Called in Network Socket Code This section describes how to trace functions called from the kernel's net/socket.c file. This task helps you identify, in finer detail, how each process interacts with the network at the kernel level. Example 4.3. 
socket-trace.stp #!/usr/bin/stap probe kernel.function("*@net/socket.c").call { printf ("%s -> %s\n", thread_indent(1), probefunc()) } probe kernel.function("*@net/socket.c").return { printf ("%s <- %s\n", thread_indent(-1), probefunc()) } Example 4.3, "socket-trace.stp" is identical to Example 3.6, "thread_indent.stp" , which was earlier used in SystemTap Functions to illustrate how thread_indent() works. Example 4.4. Example 4.3, "socket-trace.stp" Sample Output Example 4.4, "Example 4.3, "socket-trace.stp" Sample Output" contains a 3-second excerpt of the output for Example 4.3, "socket-trace.stp" . For more information about the output of this script as provided by thread_indent() , see SystemTap Functions Example 3.6, "thread_indent.stp" . 4.1.3. Monitoring Incoming TCP Connections This section illustrates how to monitor incoming TCP connections. This task is useful in identifying any unauthorized, suspicious, or otherwise unwanted network access requests in real time. Example 4.5. tcp_connections.stp #! /usr/bin/env stap probe begin { printf("%6s %16s %6s %6s %16s\n", "UID", "CMD", "PID", "PORT", "IP_SOURCE") } probe kernel.function("tcp_accept").return?, kernel.function("inet_csk_accept").return? { sock = USDreturn if (sock != 0) printf("%6d %16s %6d %6d %16s\n", uid(), execname(), pid(), inet_get_local_port(sock), inet_get_ip_source(sock)) } While Example 4.5, "tcp_connections.stp" is running, it will print out the following information about any incoming TCP connections accepted by the system in real time: Current UID CMD - the command accepting the connection PID of the command Port used by the connection IP address from which the TCP connection originated Example 4.6. Example 4.5, "tcp_connections.stp" Sample Output 4.1.4. Monitoring Network Packets Drops in Kernel The network stack in Linux can discard packets for various reasons. Some Linux kernels include a tracepoint, kernel.trace("kfree_skb") , which easily tracks where packets are discarded. Example 4.7, "dropwatch.stp" uses kernel.trace("kfree_skb") to trace packet discards; the script summarizes which locations discard packets every five-second interval. Example 4.7. dropwatch.stp #!/usr/bin/stap ############################################################ # Dropwatch.stp # Author: Neil Horman <[email protected]> # An example script to mimic the behavior of the dropwatch utility # http://fedorahosted.org/dropwatch ############################################################ # Array to hold the list of drop points we find global locations # Note when we turn the monitor on and off probe begin { printf("Monitoring for dropped packets\n") } probe end { printf("Stopping dropped packet monitor\n") } # increment a drop counter for every location we drop at probe kernel.trace("kfree_skb") { locations[USDlocation] <<< 1 } # Every 5 seconds report our drop locations probe timer.sec(5) { printf("\n") foreach (l in locations-) { printf("%d packets dropped at location %p\n", @count(locations[l]), l) } delete locations } The kernel.trace("kfree_skb") traces which places in the kernel drop network packets. The kernel.trace("kfree_skb") has two arguments: a pointer to the buffer being freed ( USDskb ) and the location in kernel code the buffer is being freed ( USDlocation ). Running the dropwatch.stp script 15 seconds would result in output similar in Example 4.8, "Example 4.7, "dropwatch.stp" Sample Output" . The output lists the number of misses for tracepoint address and the actual address. Example 4.8. 
Example 4.7, "dropwatch.stp" Sample Output To make the location of packet drops more meaningful, see the /boot/System.map-USD(uname -r) file. This file lists the starting addresses for each function, allowing you to map the addresses in the output of Example 4.8, "Example 4.7, "dropwatch.stp" Sample Output" to a specific function name. Given the following snippet of the /boot/System.map-USD(uname -r) file, the address 0xffffffff8024cd0f maps to the function unix_stream_recvmsg and the address 0xffffffff8044b472 maps to the function arp_rcv : | [
"#! /usr/bin/env stap global ifxmit, ifrecv global ifmerged probe netdev.transmit { ifxmit[pid(), dev_name, execname(), uid()] <<< length } probe netdev.receive { ifrecv[pid(), dev_name, execname(), uid()] <<< length } function print_activity() { printf(\"%5s %5s %-7s %7s %7s %7s %7s %-15s\\n\", \"PID\", \"UID\", \"DEV\", \"XMIT_PK\", \"RECV_PK\", \"XMIT_KB\", \"RECV_KB\", \"COMMAND\") foreach ([pid, dev, exec, uid] in ifrecv) { ifmerged[pid, dev, exec, uid] += @count(ifrecv[pid,dev,exec,uid]); } foreach ([pid, dev, exec, uid] in ifxmit) { ifmerged[pid, dev, exec, uid] += @count(ifxmit[pid,dev,exec,uid]); } foreach ([pid, dev, exec, uid] in ifmerged-) { n_xmit = @count(ifxmit[pid, dev, exec, uid]) n_recv = @count(ifrecv[pid, dev, exec, uid]) printf(\"%5d %5d %-7s %7d %7d %7d %7d %-15s\\n\", pid, uid, dev, n_xmit, n_recv, n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0, n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0, exec) } print(\"\\n\") delete ifxmit delete ifrecv delete ifmerged } probe timer.ms(5000), end, error { print_activity() }",
"n_xmit ? @sum(ifxmit[pid, dev, exec, uid])/1024 : 0 n_recv ? @sum(ifrecv[pid, dev, exec, uid])/1024 : 0",
"if n_recv != 0 then @sum(ifrecv[pid, dev, exec, uid])/1024 else 0",
"[...] PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 0 5 0 0 swapper 11178 0 eth0 2 0 0 0 synergyc PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 2886 4 eth0 79 0 5 0 cups-polld 11362 0 eth0 0 61 0 5 firefox 0 0 eth0 3 32 0 3 swapper 2886 4 lo 4 4 0 0 cups-polld 11178 0 eth0 3 0 0 0 synergyc PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 0 6 0 0 swapper 2886 4 lo 2 2 0 0 cups-polld 11178 0 eth0 3 0 0 0 synergyc 3611 0 eth0 0 1 0 0 Xorg PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 3 42 0 2 swapper 11178 0 eth0 43 1 3 0 synergyc 11362 0 eth0 0 7 0 0 firefox 3897 0 eth0 0 1 0 0 multiload-apple [...]",
"#!/usr/bin/stap probe kernel.function(\"*@net/socket.c\").call { printf (\"%s -> %s\\n\", thread_indent(1), probefunc()) } probe kernel.function(\"*@net/socket.c\").return { printf (\"%s <- %s\\n\", thread_indent(-1), probefunc()) }",
"[...] 0 Xorg(3611): -> sock_poll 3 Xorg(3611): <- sock_poll 0 Xorg(3611): -> sock_poll 3 Xorg(3611): <- sock_poll 0 gnome-terminal(11106): -> sock_poll 5 gnome-terminal(11106): <- sock_poll 0 scim-bridge(3883): -> sock_poll 3 scim-bridge(3883): <- sock_poll 0 scim-bridge(3883): -> sys_socketcall 4 scim-bridge(3883): -> sys_recv 8 scim-bridge(3883): -> sys_recvfrom 12 scim-bridge(3883):-> sock_from_file 16 scim-bridge(3883):<- sock_from_file 20 scim-bridge(3883):-> sock_recvmsg 24 scim-bridge(3883):<- sock_recvmsg 28 scim-bridge(3883): <- sys_recvfrom 31 scim-bridge(3883): <- sys_recv 35 scim-bridge(3883): <- sys_socketcall [...]",
"#! /usr/bin/env stap probe begin { printf(\"%6s %16s %6s %6s %16s\\n\", \"UID\", \"CMD\", \"PID\", \"PORT\", \"IP_SOURCE\") } probe kernel.function(\"tcp_accept\").return?, kernel.function(\"inet_csk_accept\").return? { sock = USDreturn if (sock != 0) printf(\"%6d %16s %6d %6d %16s\\n\", uid(), execname(), pid(), inet_get_local_port(sock), inet_get_ip_source(sock)) }",
"UID CMD PID PORT IP_SOURCE 0 sshd 3165 22 10.64.0.227 0 sshd 3165 22 10.64.0.227",
"#!/usr/bin/stap ############################################################ Dropwatch.stp Author: Neil Horman <[email protected]> An example script to mimic the behavior of the dropwatch utility http://fedorahosted.org/dropwatch ############################################################ Array to hold the list of drop points we find global locations Note when we turn the monitor on and off probe begin { printf(\"Monitoring for dropped packets\\n\") } probe end { printf(\"Stopping dropped packet monitor\\n\") } increment a drop counter for every location we drop at probe kernel.trace(\"kfree_skb\") { locations[USDlocation] <<< 1 } Every 5 seconds report our drop locations probe timer.sec(5) { printf(\"\\n\") foreach (l in locations-) { printf(\"%d packets dropped at location %p\\n\", @count(locations[l]), l) } delete locations }",
"Monitoring for dropped packets 51 packets dropped at location 0xffffffff8024cd0f 2 packets dropped at location 0xffffffff8044b472 51 packets dropped at location 0xffffffff8024cd0f 1 packets dropped at location 0xffffffff8044b472 97 packets dropped at location 0xffffffff8024cd0f 1 packets dropped at location 0xffffffff8044b472 Stopping dropped packet monitor",
"[...] ffffffff8024c5cd T unlock_new_inode ffffffff8024c5da t unix_stream_sendmsg ffffffff8024c920 t unix_stream_recvmsg ffffffff8024cea1 t udp_v4_lookup_longway [...] ffffffff8044addc t arp_process ffffffff8044b360 t arp_rcv ffffffff8044b487 t parp_redo ffffffff8044b48c t arp_solicit [...]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/useful-systemtap-scripts |
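Rather than cross-referencing /boot/System.map-$(uname -r) by hand, the reporting probe in dropwatch.stp can resolve the addresses itself. The following is a minimal sketch of an alternative reporting probe, assuming the symname() tapset function is available in the installed SystemTap version; the rest of the script is left unchanged.

# Every 5 seconds report our drop locations, resolved to kernel function names
probe timer.sec(5)
{
  printf("\n")
  foreach (l in locations-) {
    printf("%d packets dropped at %s\n", @count(locations[l]), symname(l))
  }
  delete locations
}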
10.11. Teiid Management CLI | 10.11. Teiid Management CLI The AS CLI is a command line based administrative and monitoring tool for Teiid. AdminShell provides a binding into the Groovy scripting language and higher level methods that are often needed when interacting with Teiid. It is still useful to know the underlying CLI commands in many circumstances. The below is a series useful CLI commands for administering a Teiid Server. VDB Operations Source Operations Translator Operations Runtime Operations | [
"deploy adminapi-test-vdb.xml undeploy adminapi-test-vdb.xml /subsystem=teiid:restart-vdb(vdb-name=AdminAPITestVDB, vdb-version=1, model-names=TestModel) /subsystem=teiid:list-vdbs() /subsystem=teiid:get-vdb(vdb-name=AdminAPITestVDB,vdb-version=1) /subsystem=teiid:change-vdb-connection-type(vdb-name=AdminAPITestVDB, vdb-version=1,connection-type=ANY) /subsystem=teiid:add-data-role(vdb-name=AdminAPITestVDB, vdb-version=1, data-role=TestDataRole, mapped-role=test) /subsystem=teiid:remove-data-role(vdb-name=AdminAPITestVDB, vdb-version=1, data-role=TestDataRole, mapped-role=test)",
"/subsystem=teiid:add-source(vdb-name=AdminAPITestVDB, vdb-version=1, source-name=text-connector-test, translator-name=file, model-name=TestModel, ds-name=java:/test-file) /subsystem=teiid:remove-source(vdb-name=AdminAPITestVDB, vdb-version=1, source-name=text-connector-test, model-name=TestModel) /subsystem=teiid:update-source(vdb-name=AdminAPITestVDB, vdb-version=1, source-name=text-connector-test, translator-name=file, ds-name=java:/marketdata-file)",
"/subsystem=teiid:list-translators() /subsystem=teiid:get-translator(translator-name=file) /subsystem=teiid:read-translator-properties(translator-name=file,type=OVERRIDE) /subsystem=teiid:read-rar-description(rar-name=file)",
"/subsystem=teiid:workerpool-statistics() /subsystem=teiid:cache-types() /subsystem=teiid:clear-cache(cache-type=PREPARED_PLAN_CACHE) /subsystem=teiid:clear-cache(cache-type=QUERY_SERVICE_RESULT_SET_CACHE) /subsystem=teiid:clear-cache(cache-type=PREPARED_PLAN_CACHE, vdb-name=AdminAPITestVDB,vdb-version=1) /subsystem=teiid:clear-cache(cache-type=QUERY_SERVICE_RESULT_SET_CACHE, vdb-name=AdminAPITestVDB,vdb-version=1) /subsystem=teiid:cache-statistics(cache-type=PREPARED_PLAN_CACHE) /subsystem=teiid:cache-statistics(cache-type=QUERY_SERVICE_RESULT_SET_CACHE) /subsystem=teiid:engine-statistics() /subsystem=teiid:list-sessions() /subsystem=teiid:terminate-session(session=sessionid) /subsystem=teiid:list-requests() /subsystem=teiid:cancel-request(session=sessionId, execution-id=1) /subsystem=teiid:list-requests-per-session(session=sessionId) /subsystem=teiid:list-transactions() /subsystem=teiid:mark-datasource-available(ds-name=java:/accounts-ds) /subsystem=teiid:get-query-plan(session=sessionid,execution-id=1)"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/teiid_management_cli |
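The operations above are entered at the management CLI prompt. As a minimal sketch (assuming a standard JBoss EAP installation, with EAP_HOME as a placeholder for the installation directory and the default management interface), an operation can also be run non-interactively by passing it to the CLI launcher, which is convenient for scripting:

# List the deployed VDBs without opening an interactive session
EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=teiid:list-vdbs()"

# Run a batch of operations stored in a file
EAP_HOME/bin/jboss-cli.sh --connect --file=teiid-operations.cli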
Chapter 25. Rack schema reference | Chapter 25. Rack schema reference Used in: KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of Rack schema properties The rack option configures rack awareness. A rack can represent an availability zone, data center, or an actual rack in your data center. The rack is configured through a topologyKey . topologyKey identifies a label on OpenShift nodes that contains the name of the topology in its value. An example of such a label is topology.kubernetes.io/zone (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions), which contains the name of the availability zone in which the OpenShift node runs. You can configure your Kafka cluster to be aware of the rack in which it runs, and enable additional features such as spreading partition replicas across different racks or consuming messages from the closest replicas. For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints . Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed. 25.1. Spreading partition replicas across racks When rack awareness is configured, Streams for Apache Kafka will set broker.rack configuration for each Kafka broker. The broker.rack configuration assigns a rack ID to each broker. When broker.rack is configured, Kafka brokers will spread partition replicas across as many different racks as possible. When replicas are spread across multiple racks, the probability that multiple replicas will fail at the same time is lower than if they would be in the same rack. Spreading replicas improves resiliency, and is important for availability and reliability. To enable rack awareness in Kafka, add the rack option to the .spec.kafka section of the Kafka custom resource as shown in the example below. Example rack configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone # ... Note The rack in which brokers are running can change in some cases when the pods are deleted or restarted. As a result, the replicas running in different racks might then share the same rack. Use Cruise Control and the KafkaRebalance resource with the RackAwareGoal to make sure that replicas remain distributed across different racks. When rack awareness is enabled in the Kafka custom resource, Streams for Apache Kafka will automatically add the OpenShift preferredDuringSchedulingIgnoredDuringExecution affinity rule to distribute the Kafka brokers across the different racks. However, the preferred rule does not guarantee that the brokers will be spread. Depending on your exact OpenShift and Kafka configurations, you should add additional affinity rules or configure topologySpreadConstraints for both ZooKeeper and Kafka to make sure the nodes are properly distributed accross as many racks as possible. For more information see Configuring pod scheduling . 25.2. Consuming messages from the closest replicas Rack awareness can also be used in consumers to fetch data from the closest replica. This is useful for reducing the load on your network when a Kafka cluster spans multiple datacenters and can also reduce costs when running Kafka in public clouds. However, it can lead to increased latency. In order to be able to consume from the closest replica, rack awareness has to be configured in the Kafka cluster, and the RackAwareReplicaSelector has to be enabled. 
The replica selector plugin provides the logic that enables clients to consume from the nearest replica. The default implementation uses LeaderSelector to always select the leader replica for the client. Specify RackAwareReplicaSelector for the replica.selector.class to switch from the default implementation. Example rack configuration with enabled replica-aware selector apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone config: # ... replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector # ... In addition to the Kafka broker configuration, you also need to specify the client.rack option in your consumers. The client.rack option should specify the rack ID in which the consumer is running. RackAwareReplicaSelector associates matching broker.rack and client.rack IDs, to find the nearest replica and consume from it. If there are multiple replicas in the same rack, RackAwareReplicaSelector always selects the most up-to-date replica. If the rack ID is not specified, or if it cannot find a replica with the same rack ID, it will fall back to the leader replica. Figure 25.1. Example showing client consuming from replicas in the same availability zone You can also configure Kafka Connect, MirrorMaker 2 and Kafka Bridge so that connectors consume messages from the closest replicas. You enable rack awareness in the KafkaConnect , KafkaMirrorMaker2 , and KafkaBridge custom resources. The configuration does does not set affinity rules, but you can also configure affinity or topologySpreadConstraints . For more information see Configuring pod scheduling . When deploying Kafka Connect using Streams for Apache Kafka, you can use the rack section in the KafkaConnect custom resource to automatically configure the client.rack option. Example rack configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... When deploying MirrorMaker 2 using Streams for Apache Kafka, you can use the rack section in the KafkaMirrorMaker2 custom resource to automatically configure the client.rack option. Example rack configuration for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... When deploying Kafka Bridge using Streams for Apache Kafka, you can use the rack section in the KafkaBridge custom resource to automatically configure the client.rack option. Example rack configuration for Kafka Bridge apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... 25.3. Rack schema properties Property Property type Description topologyKey string A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set a broker's broker.rack config, and the client.rack config for Kafka Connect or MirrorMaker 2. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone config: # replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 spec: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # rack: topologyKey: topology.kubernetes.io/zone #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-rack-reference |
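For a consumer application that runs outside Kafka Connect, MirrorMaker 2, or Kafka Bridge, the client.rack option is set directly in the consumer configuration. The following is a minimal sketch of consumer properties; the bootstrap address and the zone name us-east-1a are placeholders and must match your cluster and the value of the node label used as the topologyKey:

bootstrap.servers=my-cluster-kafka-bootstrap:9092
group.id=my-group
# Must match the rack (availability zone) the consumer runs in
client.rack=us-east-1a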
Chapter 3. Python Examples | Chapter 3. Python Examples 3.1. Overview This section provides examples demonstrating the steps to create a virtual machine within a basic Red Hat Virtualization environment, using the Python SDK. These examples use the ovirtsdk Python library provided by the ovirt-engine-sdk-python package. This package is available to systems attached to a Red Hat Virtualization subscription pool in Red Hat Subscription Manager. See Installing the Software Development Kit for more information on subscribing your system(s) to download the software. You will also need: A networked installation of Red Hat Virtualization Manager. A networked and configured Red Hat Virtualization Host. An ISO image file containing an operating system for installation on a virtual machine. A working understanding of both the logical and physical objects that make up a Red Hat Virtualization environment. A working understanding of the Python programming language. The examples include placeholders for authentication details ( admin@internal for user name, and password for password). Replace the placeholders with the authentication requirements of your environment. Red Hat Virtualization Manager generates a globally unique identifier (GUID) for the id attribute for each resource. Identifier codes in these examples differ from the identifier codes in your Red Hat Virtualization environment. The examples contain only basic exception and error handling logic. For more information on the exception handling specific to the SDK, see the pydoc for the ovirtsdk.infrastructure.errors module: USD pydoc ovirtsdk.infrastructure.errors 3.2. Connecting to the Red Hat Virtualization Manager in Version 4 To connect to the Red Hat Virtualization Manager, you must create an instance of the Connection class from the ovirtsdk4.sdk module by importing the class at the start of the script: import ovirtsdk4 as sdk The constructor of the Connection class takes a number of arguments. Supported arguments are: url A string containing the base URL of the Manager, such as https://server.example.com/ovirt-engine/api . username Specifies the user name to connect, such as admin@internal . This parameter is mandatory. password Specifies the password for the user name provided by the username parameter. This parameter is mandatory. token An optional token to access the API, instead of a user name and password. If the token parameter is not specified, the SDK will create one automatically. insecure A Boolean flag that indicates whether the server's TLS certificate and host name should be checked. ca_file A PEM file containing the trusted CA certificates. The certificate presented by the server will be verified using these CA certificates. If ca_file parameter is not set, the system-wide CA certificate store is used. debug A Boolean flag indicating whether debug output should be generated. If the value is True and the log parameter is not None , the data sent to and received from the server will be written to the log. Note User names and passwords are written to the debug log, so handle it with care. Compression is disabled in debug mode, which means that debug messages are sent as plain text. log The logger where the log messages will be written. kerberos A Boolean flag indicating whether Kerberos authentication should be used instead of the default basic authentication. timeout The maximum total time to wait for the response, in seconds. A value of 0 (default) means to wait forever. 
If the timeout expires before the response is received, an exception is raised. compress A Boolean flag indicating whether the SDK should ask the server to send compressed responses. The default is True . This is a hint for the server, which may return uncompressed data even when this parameter is set to True . Compression is disabled in debug mode, which means that debug messages are sent as plain text. sso_url A string containing the base SSO URL of the server. The default SSO URL is computed from the url if no sso_url is provided. sso_revoke_url A string containing the base URL of the SSO revoke service. This needs to be specified only when using an external authentication service. By default, this URL is automatically calculated from the value of the url parameter, so that SSO token revoke will be performed using the SSO service, which is part of the Manager. sso_token_name The token name in the JSON SSO response returned from the SSO server. Default value is access_token . headers A dictionary with headers, which should be sent with every request. connections The maximum number of connections to open to the host. If the value is 0 (default), the number of connections is unlimited. pipeline The maximum number of requests to put in an HTTP pipeline without waiting for the response. If the value is 0 (default), pipelining is disabled. import ovirtsdk4 as sdk # Create a connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) connection.test() print("Connected successfully!") connection.close() For a full list of supported methods, you can generate the documentation for the ovirtsdk.api module on the Manager machine: USD pydoc ovirtsdk.api 3.3. Listing Data Centers The datacenters collection contains all the data centers in the environment. Example 3.1. Listing data centers This example lists the data centers in the datacenters collection and output some basic information about each data center in the collection. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) dcs_service = connection.system_service().dcs_service() dcs = dcs_service.list() for dc in dcs: print("%s (%s)" % (dc.name, dc.id)) connection.close() In an environment where only the Default data center exists, and it is not activated, the examples output the text: Default (00000000-0000-0000-0000-000000000000) 3.4. Listing Clusters The clusters collection contains all clusters in the environment. Example 3.2. Listing clusters This example lists the clusters in the clusters collection and output some basic information about each cluster in the collection. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) cls_service = connection.system_service().clusters_service() cls = cls_service.list() for cl in cls: print("%s (%s)" % (cl.name, cl.id)) connection.close() In an environment where only the Default cluster exists, the examples output the text: Default (00000000-0000-0000-0000-000000000000) 3.5. Listing Hosts The hosts collection contains all hosts in the environment. Example 3.3. Listing hosts This example lists the hosts in the hosts collection and their IDs. 
V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) host_service = connection.system_service().hosts_service() hosts = host_service.list() for host in hosts: print("%s (%s)" % (host.name, host.id)) connection.close() In an environment where only one host, MyHost , has been attached, the examples output the text: MyHost (00000000-0000-0000-0000-000000000000) 3.6. Listing Logical Networks The networks collection contains all logical networks in the environment. Example 3.4. Listing logical networks This example lists the logical networks in the networks collection and outputs some basic information about each network in the collection. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) nws_service = connection.system_service().networks_service() nws = nws_service.list() for nw in nws: print("%s (%s)" % (nw.name, nw.id)) connection.close() In an environment where only the default management network exists, the examples output the text: ovirtmgmt (00000000-0000-0000-0000-000000000000) 3.7. Listing Virtual Machines and Total Disk Size The vms collection contains a disks collection that describes the details of each disk attached to a virtual machine. Example 3.5. Listing virtual machines and total disk size This example prints a list of virtual machines and their total disk size in bytes: V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) vms_service = connection.system_service().vms_service() virtual_machines = vms_service.list() if len(virtual_machines) > 0: print("%-30s %s" % ("Name", "Disk Size")) print("==================================================") for virtual_machine in virtual_machines: vm_service = vms_service.vm_service(virtual_machine.id) disk_attachments = vm_service.disk_attachments_service().list() disk_size = 0 for disk_attachment in disk_attachments: disk = connection.follow_link(disk_attachment.disk) disk_size += disk.provisioned_size print("%-30s: %d" % (virtual_machine.name, disk_size)) The examples output the virtual machine names and their disk sizes: Name Disk Size ================================================== vm1 50000000000 3.8. Creating NFS Data Storage When a Red Hat Virtualization environment is first created, it is necessary to define at least a data storage domain and an ISO storage domain. The data storage domain stores virtual disks while the ISO storage domain stores the installation media for guest operating systems. The storagedomains collection contains all the storage domains in the environment and can be used to add and remove storage domains. Note The code provided in this example assumes that the remote NFS share has been pre-configured for use with Red Hat Virtualization. See the Administration Guide for more information on preparing NFS shares. Example 3.6. Creating NFS data storage This example adds an NFS data domain to the storagedomains collection. V4 For V4, the add method is used to add the new storage domain and the types class is used to pass the following parameters: A name for the storage domain. The data center object that was retrieved from the datacenters collection. 
The host object that was retrieved from the hosts collection. The type of storage domain being added ( data , iso , or export ). The storage format to use ( v1 , v2 , or v3 ). import ovirtsdk4 as sdk import ovirtsdk4.types as types # Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() # Create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='mydata', description='My data', type=types.StorageDomainType.DATA, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='_FQDN_', path='/nfs/ovirt/path/to/mydata', ), ), ) # Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print("Storage Domain '%s' added (%s)." % (sd.name(), sd.id())) connection.close() If the add method call is successful, the examples output the text: Storage Domain 'mydata' added (00000000-0000-0000-0000-000000000000). 3.9. Creating NFS ISO Storage To create a virtual machine, you need installation media for the guest operating system. The installation media are stored in an ISO storage domain. Note The code provided in this example assumes that the remote NFS share has been pre-configured for use with Red Hat Virtualization. See the Administration Guide for more information on preparing NFS shares. Example 3.7. Creating NFS ISO storage This example adds an NFS ISO domain to the storagedomains collection. V4 For V4, the add method is used to add the new storage domain and the types class is used to pass the following parameters: A name for the storage domain. The data center object that was retrieved from the datacenters collection. The host object that was retrieved from the hosts collection. The type of storage domain being added ( data , iso , or export ). The storage format to use ( v1 , v2 , or v3 ). import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() # Use the "add" method to create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='myiso', description='My ISO', type=types.StorageDomainType.ISO, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='FQDN', path='/nfs/ovirt/path/to/myiso', ), ), ) # Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print("Storage Domain '%s' added (%s)." % (sd.name(), sd.id())) # Close the connection to the server: connection.close() If the add method call is successful, the examples output the text: Storage Domain 'myiso' added (00000000-0000-0000-0000-000000000000). 3.10. Attaching a Storage Domain to a Data Center Once you have added a storage domain to Red Hat Virtualization, you must attach it to a data center and activate it before it will be ready for use. Example 3.8. 
Attaching a storage domain to a data center This example attaches an existing NFS storage domain, mydata , to the an existing data center, Default . The attach action is facilitated by the add method of the data center's storagedomains collection. These examples may be used to attach both data and ISO storage domains. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types # Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the service that manages the storage domains and use it to # search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] # Locate the service that manages the data centers and use it to # search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] # Locate the service that manages the data center where we want to # attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) # Locate the service that manages the storage domains that are attached # to the data centers: attached_sds_service = dc_service.storage_domains_service() # Use the "add" method of service that manages the attached storage # domains to attach it: attached_sds_service.add( types.StorageDomain( id=sd.id, ), ) # Wait until the storage domain is active: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print("Attached data storage domain '%s' to data center '%s' (Status: %s)." % (sd.name(), dc.name(), sd.status.state())) # Close the connection to the server: connection.close() If the calls to the add methods are successful, the examples output the following text: Attached data storage domain 'data1' to data center 'Default' (Status: maintenance). Status: maintenance indicates that the storage domains still need to be activated. 3.11. Activating a Storage Domain Once you have added a storage domain to Red Hat Virtualization and attached it to a data center, you must activate it before it will be ready for use. Example 3.9. Activating a storage domain This example activates an NFS storage domain, mydata , attached to the data center, Default . The activate action is facilitated by the activate method of the storage domain. 
V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the service that manages the storage domains and use it to # search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] # Locate the service that manages the data centers and use it to # search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] # Locate the service that manages the data center where we want to # attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) # Locate the service that manages the storage domains that are attached # to the data centers: attached_sds_service = dc_service.storage_domains_service() # Activate storage domain: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) attached_sd_service.activate() # Wait until the storage domain is active: while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print("Activated storage domain '%s' in data center '%s' (Status: %s)." % (sd.name, dc.name, sd.status)) # Close the connection to the server: connection.close() If the activate requests are successful, the examples output the text: Activated storage domain 'mydata' in data center 'Default' (Status: active). Status: active indicates that the storage domains have been activated. 3.12. Listing Files in an ISO Storage Domain The storagedomains collection contains a files collection that describes the files in a storage domain. Example 3.10. Listing Files in an ISO Storage Domain This example prints a list of the ISO files in each ISO storage domain: V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) storage_domains_service = connection.system_service().storage_domains_service() storage_domains = storage_domains_service.list() for storage_domain in storage_domains: if(storage_domain.type == types.StorageDomainType.ISO): print(storage_domain.name + ":\n") files = storage_domain.files_service().list() for file in files: print("%s" % file.name + "\n") connection.close() The examples output the text: ISO_storage_domain: file1 file2 3.13. Creating a Virtual Machine Virtual machine creation is performed in several steps. The first step, covered here, is to create the virtual machine object itself. Example 3.11. Creating a virtual machine This example creates a virtual machine, vm1 , with the following requirements: 512 MB of memory, expressed in bytes. Attached to the Default cluster, and therefore the Default data center. Based on the default Blank template. Boots from the virtual hard disk drive. V4 In V4, the options are added as types , using the add method.
import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Use the "add" method to create a new virtual machine: vm = vms_service.add( types.Vm( name='vm1', memory=512*1024*1024, cluster=types.Cluster( name='Default', ), template=types.Template( name='Blank', ), os=types.OperatingSystem( boot=types.Boot(devices=[types.BootDevice.HD]) ), ), ) print("Virtual machine '%s' added." % vm.name) # Close the connection to the server: connection.close() If the add request is successful, the examples output the text: Virtual machine 'vm1' added. 3.14. Creating a Virtual NIC To ensure that a newly created virtual machine has network access, you must create and attach a virtual NIC. Example 3.12. Creating a virtual NIC This example creates a NIC, nic1 , and attaches it to a virtual machine, vm1 . The NIC in this example is a virtio network device and is attached to the ovirtmgmt management network. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the virtual machines service and use it to find the virtual # machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the network interface cards of the # virtual machine: nics_service = vms_service.vm_service(vm.id).nics_service() # Locate the vnic profiles service and use it to find the ovirtmgmt # network's profile id: profiles_service = connection.system_service().vnic_profiles_service() profile_id = None for profile in profiles_service.list(): if profile.name == 'ovirtmgmt': profile_id = profile.id break # Use the "add" method of the network interface cards service to add the # new network interface card: nic = nics_service.add( types.Nic( name='nic1', interface=types.NicInterface.VIRTIO, vnic_profile=types.VnicProfile(id=profile_id), ), ) print("Network interface '%s' added to '%s'." % (nic.name, vm.name)) connection.close() If the add request is successful, the examples output the text: Network interface 'nic1' added to 'vm1'. 3.15. Creating a Virtual Machine Disk To ensure that a newly created virtual machine has access to persistent storage, you must create and attach a disk. Example 3.13. Creating a virtual machine disk This example creates an 8 GB virtio disk and attaches it to a virtual machine, vm1 . The disk has the following requirements: Stored on the storage domain named data1 . 8 GB in size. system type disk (as opposed to data ). virtio storage device. COW format. Marked as a usable boot device. V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the virtual machines service and use it to find the virtual # machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the disk attachments of the virtual # machine: disk_attachments_service = vms_service.vm_service(vm.id).disk_attachments_service() # Use the "add" method of the disk attachments service to add the disk.
# Note that the size of the disk, the `provisioned_size` attribute, is # specified in bytes, so to create a disk of 10 GiB the value should # be 10 * 2^30. disk_attachment = disk_attachments_service.add( types.DiskAttachment( disk=types.Disk( format=types.DiskFormat.COW, provisioned_size=8 * 2**30, storage_domains=[ types.StorageDomain( name='data1', ), ], ), interface=types.DiskInterface.VIRTIO, bootable=True, active=True, ), ) # Wait until the disk status is OK: disks_service = connection.system_service().disks_service() disk_service = disks_service.disk_service(disk_attachment.disk.id) while True: time.sleep(5) disk = disk_service.get() if disk.status == types.DiskStatus.OK: break print("Disk '%s' added to '%s'." % (disk.name, vm.name)) # Close the connection to the server: connection.close() If the add request is successful, the examples output the text: Disk 'vm1_Disk1' added to 'vm1'. 3.16. Attaching an ISO Image to a Virtual Machine To install a guest operating system on a newly created virtual machine, you must attach an ISO file containing the operating system installation media. To locate the ISO file, see Listing Files in an ISO Storage Domain . Example 3.14. Attaching an ISO image to a virtual machine This example attaches my_iso_file.iso to the vm1 virtual machine, using the add method of the virtual machine's cdroms collection. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() # Get the first CDROM: cdrom = cdroms_service.list()[0] # Locate the service that manages the CDROM device found in the previous step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) # Change the CD of the VM to 'my_iso_file.iso'. By default the # operation permanently changes the disk that is visible to the # virtual machine after the next boot, but has no effect # on the currently running virtual machine. If you want to change the # disk that is visible to the currently running virtual machine, change # the `current` parameter's value to `True`. cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='my_iso_file.iso' ), ), current=False, ) print("Attached CD to '%s'." % vm.name) # Close the connection to the server: connection.close() If the update request is successful, the examples output the text: Attached CD to 'vm1'. Example 3.15. Ejecting a cdrom from a virtual machine This example ejects an ISO image from a virtual machine's cdrom collection.
V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service() # Get the first found CDROM: cdrom = cdroms_service.list()[0] # Locate the service that manages the CDROM device found in step # of the VM: cdrom_service = cdroms_service.cdrom_service(cdrom.id) cdrom_service.remove() print("Removed CD from '%s'." % vm.name()) connection.close() If the delete or remove request is successful, the examples output the text: Removed CD from 'vm1'. 3.17. Detaching a Disk You can detach a disk from a virtual machine. Detaching a disk V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) attachments_service = vm_service.disk_attachments_service() attachment = ( (a for a in disk_attachments if a.disk.id == disk.id), None ) # Remove the attachment. The default behavior is that the disk is detached # from the virtual machine, but not deleted from the system. If you wish to # delete the disk, change the detach_only parameter to "False". attachment.remove(detach_only=True) print("Detached disk %s successfully!" % attachment) # Close the connection to the server: connection.close() If the delete or remove request is successful, the examples output the text: Detached disk vm1_disk1 successfully! 3.18. Starting a Virtual Machine You can start a virtual machine. Example 3.16. Starting a virtual machine This example starts the virtual machine using the start method. V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine, as that is where # the action methods are defined: vm_service = vms_service.vm_service(vm.id) # Call the "start" method of the service to start it: vm_service.start() # Wait until the virtual machine is up: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print("Started '%s'." % vm.name()) # Close the connection to the server: connection.close() If the start request is successful, the examples output the text: Started 'vm1'. The UP status indicates that the virtual machine is running. 3.19. Starting a Virtual Machine with Overridden Parameters You can start a virtual machine, overriding its default parameters. Example 3.17. 
Starting a virtual machine with overridden parameters This example boots a virtual machine with a Windows ISO and attaches the virtio-win_x86.vfd floppy disk, which contains Windows drivers. This action is equivalent to using the Run Once window in the Administration Portal to start a virtual machine. V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() # Get the first CDROM: cdrom = cdroms_service.list()[0] # Locate the service that manages the CDROM device found in the previous step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) # Change the CD of the VM to 'windows_example.iso': cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='windows_example.iso' ), ), current=False, ) # Call the "start" method of the service to start it: vm_service.start( vm=types.Vm( os=types.OperatingSystem( boot=types.Boot( devices=[ types.BootDevice.CDROM, ] ) ), ) ) # Wait until the virtual machine's status is "UP": while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print("Started '%s'." % vm.name) # Close the connection to the server: connection.close() Note The CD image and floppy disk file must be available to the virtual machine. See Uploading Images to a Data Storage Domain for details. 3.20. Starting a Virtual Machine with Cloud-Init You can start a virtual machine with a specific configuration, using the Cloud-Init tool. Example 3.18. Starting a virtual machine with Cloud-Init This example shows you how to start a virtual machine using the Cloud-Init tool to set a host name and a static IP for the eth0 interface. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search = 'name=vm1')[0] # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Start the virtual machine enabling cloud-init and providing the # password for the `root` user and the network configuration: vm_service.start( use_cloud_init=True, vm=types.Vm( initialization=types.Initialization( user_name='root', root_password='password', host_name='MyHost.example.com', nic_configurations=[ types.NicConfiguration( name='eth0', on_boot=True, boot_protocol=types.BootProtocol.STATIC, ip=types.Ip( version=types.IpVersion.V4, address='10.10.10.1', netmask='255.255.255.0', gateway='10.10.10.1' ) ) ] ) ) ) # Close the connection to the server: connection.close() 3.21. Checking System Events Red Hat Virtualization Manager records and logs many system events. These event logs are accessible through the user interface, the system log files, and using the API. The ovirtsdk library exposes events using the events collection. Example 3.19. Checking system events In this example the events collection is listed.
The query parameter of the list method is used to ensure that all available pages of results are returned. By default the list method returns only the first page of results, which is 100 records in length. The returned list is sorted in reverse chronological order, to display the events in the order in which they occurred. V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Find the service that manages the collection of events: events_service = connection.system_service().events_service() page_number = 1 events = events_service.list(search='page %s' % page_number) while events: for event in events: print( "%s %s CODE %s - %s" % ( event.time, event.severity, event.code, event.description, ) ) page_number = page_number + 1 events = events_service.list(search='page %s' % page_number) # Close the connection to the server: connection.close() These examples output events in the following format: YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in. YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 153 - VM vm1 was started by admin@internal (Host: MyHost). YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in. | [
"pydoc ovirtsdk.infrastructure.errors",
"import ovirtsdk4 as sdk",
"import ovirtsdk4 as sdk Create a connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) connection.test() print(\"Connected successfully!\") connection.close()",
"pydoc ovirtsdk.api",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) dcs_service = connection.system_service().dcs_service() dcs = dcs_service.list() for dc in dcs: print(\"%s (%s)\" % (dc.name, dc.id)) connection.close()",
"Default (00000000-0000-0000-0000-000000000000)",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) cls_service = connection.system_service().clusters_service() cls = cls_service.list() for cl in cls: print(\"%s (%s)\" % (cl.name, cl.id)) connection.close()",
"Default (00000000-0000-0000-0000-000000000000)",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) host_service = connection.system_service().hosts_service() hosts = host_service.list() for host in hosts: print(\"%s (%s)\" % (host.name, host.id)) connection.close()",
"MyHost (00000000-0000-0000-0000-000000000000)",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) nws_service = connection.system_service().networks_service() nws = nws_service.list() for nw in nws: print(\"%s (%s)\" % (nw.name, nw.id)) connection.close()",
"ovirtmgmt (00000000-0000-0000-0000-000000000000)",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) vms_service = connection.system_service().vms_service() virtual_machines = vms_service.list() if len(virtual_machines) > 0: print(\"%-30s %s\" % (\"Name\", \"Disk Size\")) print(\"==================================================\") for virtual_machine in virtual_machines: vm_service = vms_service.vm_service(virtual_machine.id) disk_attachments = vm_service.disk_attachments_service().list() disk_size = 0 for disk_attachment in disk_attachments: disk = connection.follow_link(disk_attachment.disk) disk_size += disk.provisioned_size print(\"%-30s: %d\" % (virtual_machine.name, disk_size))",
"Name Disk Size ================================================== vm1 50000000000",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() Create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='mydata', description='My data', type=types.StorageDomainType.DATA, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='_FQDN_', path='/nfs/ovirt/path/to/mydata', ), ), ) Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print(\"Storage Domain '%s' added (%s).\" % (sd.name(), sd.id())) connection.close()",
"Storage Domain 'mydata' added (00000000-0000-0000-0000-000000000000).",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() Use the \"add\" method to create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='myiso', description='My ISO', type=types.StorageDomainType.ISO, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='FQDN', path='/nfs/ovirt/path/to/myiso', ), ), ) Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print(\"Storage Domain '%s' added (%s).\" % (sd.name(), sd.id())) Close the connection to the server: connection.close()",
"Storage Domain 'myiso' added (00000000-0000-0000-0000-000000000000).",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the service that manages the storage domains and use it to search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] Locate the service that manages the data centers and use it to search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] Locate the service that manages the data center where we want to attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) Locate the service that manages the storage domains that are attached to the data centers: attached_sds_service = dc_service.storage_domains_service() Use the \"add\" method of service that manages the attached storage domains to attach it: attached_sds_service.add( types.StorageDomain( id=sd.id, ), ) Wait until the storage domain is active: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print(\"Attached data storage domain '%s' to data center '%s' (Status: %s).\" % (sd.name(), dc.name(), sd.status.state())) Close the connection to the server: connection.close()",
"Attached data storage domain 'data1' to data center 'Default' (Status: maintenance).",
"import ovirtsdk4 as sdk connection = sdk.Connection url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the service that manages the storage domains and use it to search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] Locate the service that manages the data centers and use it to search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] Locate the service that manages the data center where we want to attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) Locate the service that manages the storage domains that are attached to the data centers: attached_sds_service = dc_service.storage_domains_service() Activate storage domain: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) attached_sd_service.activate() Wait until the storage domain is active: while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print(\"Attached data storage domain '%s' to data center '%s' (Status: %s).\" % (sd.name(), dc.name(), sd.status.state())) Close the connection to the server: connection.close()",
"Activated storage domain 'mydata' in data center 'Default' (Status: active).",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) storage_domains_service = connection.system_service().storage_domains_service() storage_domains = storage_domains_service.list() for storage_domain in storage_domains: if(storage_domain.type == types.StorageDomainType.ISO): print(storage_domain.name + \":\\n\") files = storage_domain.files_service().list() for file in files: print(\"%s\" % file.name + \"\\n\") connection.close()",
"ISO_storage_domain: file1 file2",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Use the \"add\" method to create a new virtual machine: vms_service.add( types.Vm( name='vm1', memory = 512*1024*1024 cluster=types.Cluster( name='Default', ), template=types.Template( name='Blank', ), os=types.OperatingSystem(boot=types.Boot(devices=[types.BootDevice.HD)] ), ) print(\"Virtual machine '%s' added.\" % vm.name) Close the connection to the server: connection.close()",
"Virtual machine 'vm1' added.",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the virtual machines service and use it to find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the network interface cards of the virtual machine: nics_service = vms_service.vm_service(vm.id).nics_service() Locate the vnic profiles service and use it to find the ovirmgmt network's profile id: profiles_service = connection.system_service().vnic_profiles_service() profile_id = None for profile in profiles_service.list(): if profile.name == 'ovirtmgmt': profile_id = profile.id break Use the \"add\" method of the network interface cards service to add the new network interface card: nic = nics_service.add( types.Nic( name='nic1', interface=types.NicInterface.VIRTIO, vnic_profile=types.VnicProfile(id=profile_id), ), ) print(\"Network interface '%s' added to '%s'.\" % (nic.name, vm.name)) connection.close()",
"Network interface 'nic1' added to 'vm1'.",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the virtual machines service and use it to find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the disk attachments of the virtual machine: disk_attachments_service = vms_service.vm_service(vm.id).disk_attachments_service() Use the \"add\" method of the disk attachments service to add the disk. Note that the size of the disk, the `provisioned_size` attribute, is specified in bytes, so to create a disk of 10 GiB the value should be 10 * 2^30. disk_attachment = disk_attachments_service.add( types.DiskAttachment( disk=types.Disk( format=types.DiskFormat.COW, provisioned_size=8*1024*1024, storage_domains=[ types.StorageDomain( name='data1', ), ], ), interface=types.DiskInterface.VIRTIO, bootable=True, active=True, ), ) Wait until the disk status is OK: disks_service = connection.system_service().disks_service() disk_service = disks_service.disk_service(disk_attachment.disk.id) while True: time.sleep(5) disk = disk_service.get() if disk.status == types.DiskStatus.OK: break print(\"Disk '%s' added to '%s'.\" % (disk.name(), vm.name())) Close the connection to the server: connection.close()",
"Disk 'vm1_Disk1' added to 'vm1'.",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() Get the first CDROM: cdrom = cdroms_service.list()[0] Locate the service that manages the CDROM device found in previous step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) Change the CD of the VM to 'my_iso_file.iso'. By default the operation permanently changes the disk that is visible to the virtual machine after the next boot, but has no effect on the currently running virtual machine. If you want to change the disk that is visible to the current running virtual machine, change the `current` parameter's value to `True`. cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='my_iso_file.iso' ), ), current=False, ) print(\"Attached CD to '%s'.\" % vm.name()) Close the connection to the server: connection.close()",
"Attached CD to 'vm1'.",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service() Get the first found CDROM: cdrom = cdroms_service.list()[0] Locate the service that manages the CDROM device found in previous step of the VM: cdrom_service = cdroms_service.cdrom_service(cdrom.id) cdrom_service.remove() print(\"Removed CD from '%s'.\" % vm.name()) connection.close()",
"Removed CD from 'vm1'.",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) attachments_service = vm_service.disk_attachments_service() attachment = next( (a for a in disk_attachments if a.disk.id == disk.id), None ) Remove the attachment. The default behavior is that the disk is detached from the virtual machine, but not deleted from the system. If you wish to delete the disk, change the detach_only parameter to \"False\". attachment.remove(detach_only=True) print(\"Detached disk %s successfully!\" % attachment) Close the connection to the server: connection.close()",
"Detached disk vm1_disk1 successfully!",
"import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine, as that is where the action methods are defined: vm_service = vms_service.vm_service(vm.id) Call the \"start\" method of the service to start it: vm_service.start() Wait until the virtual machine is up: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print(\"Started '%s'.\" % vm.name()) Close the connection to the server: connection.close()",
"Started 'vm1'.",
"import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() Get the first CDROM: cdrom = cdroms_service.list()[0] Locate the service that manages the CDROM device found in previous step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) Change the CD of the VM to 'windows_example.iso': cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='windows_example.iso' ), ), current=False, ) Call the \"start\" method of the service to start it: vm_service.start( vm=types.Vm( os=types.OperatingSystem( boot=types.Boot( devices=[ types.BootDevice.CDROM, ] ) ), ) ) Wait until the virtual machine's status is \"UP\": while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print(\"Started '%s'.\" % vm.name()) Close the connection to the server: connection.close()",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search = 'name=vm1')[0] Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Start the virtual machine enabling cloud-init and providing the password for the `root` user and the network configuration: vm_service.start( use_cloud_init=True, vm=types.Vm( initialization=types.Initialization( user_name='root', root_password='password', host_name='MyHost.example.com', nic_configurations=[ types.NicConfiguration( name='eth0', on_boot=True, boot_protocol=types.BootProtocol.STATIC, ip=types.Ip( version=types.IpVersion.V4, address='10.10.10.1', netmask='255.255.255.0', gateway='10.10.10.1' ) ) ) ) ) Close the connection to the server: connection.close()",
"import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Find the service that manages the collection of events: events_service = connection.system_service().events_service() page_number = 1 events = events_service.list(search='page %s' % page_number) while events: for event in events: print( \"%s %s CODE %s - %s\" % ( event.time, event.severity, event.code, event.description, ) ) page_number = page_number + 1 events = events_service.list(search='page %s' % page_number) Close the connection to the server: connection.close()",
"YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in. YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 153 - VM vm1 was started by admin@internal (Host: MyHost). YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in."
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/python_sdk_guide/chap-Python_Examples |
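All of the preceding SDK examples follow the same polling pattern: fetch an object from its service until it reaches the expected status. The helper below is a minimal illustrative sketch of that pattern, not part of the ovirtsdk4 API; the done callback and the 5-second interval are assumptions chosen for the example.

import time

def wait_for(service, done, interval=5):
    # Poll service.get() until done(obj) returns True, then return the object.
    while True:
        obj = service.get()
        if done(obj):
            return obj
        time.sleep(interval)

# Example usage with the disk service from the attach-disk example above:
# disk = wait_for(disk_service, lambda d: d.status == types.DiskStatus.OK)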
Chapter 2. Differences from upstream OpenJDK 21 | Chapter 2. Differences from upstream OpenJDK 21 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 21 changes: FIPS support. Red Hat build of OpenJDK 21 automatically detects whether RHEL is in FIPS mode and configures Red Hat build of OpenJDK 21 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 21 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificates from RHEL. Additional resources See Improve system FIPS detection (RHEL Planning Jira) See Using system-wide cryptographic policies (RHEL documentation) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/rn-openjdk-diff-from-upstream
13. Compiler and Tools | 13. Compiler and Tools 13.1. SystemTap SystemTap is a tracing and probing tool that allows users to study and monitor the activities of the operating system (particularly, the kernel) in fine detail. It provides information similar to the output of tools like netstat, ps, top, and iostat; however, SystemTap is designed to provide more filtering and analysis options for collected information. Red Hat Enterprise Linux 6 features SystemTap version 1.1, which introduces many new features and enhancements, including: Improved support for user-space probing. Support for probing C++ programs with native C++ syntax. A more secure script-compile server. The new unprivileged mode, allowing non-root users to use SystemTap. Important Unprivileged mode is new and experimental. The stap-server facility on which it relies is undergoing work for security improvements and should be deployed with care on a trustworthy network. 13.2. OProfile OProfile is a system-wide profiler for Linux systems. The profiling runs transparently in the background and profile data can be collected at any time. Red Hat Enterprise Linux 6 features version 0.9.5 of OProfile, adding support for new Intel and AMD processors. 13.3. GNU Compiler Collection (GCC) The GNU Compiler Collection (GCC) includes, among others, C, C++, and Java GNU compilers and related support libraries. Red Hat Enterprise Linux 6 features version 4.4 of GCC, which includes the following features and enhancements: Conformance to version 3.0 of the Open Multi-Processing (OpenMP) application programming interface (API). Additional C++ libraries to utilize OpenMP threads. Further implementations of the ISO C++ standard draft (C++0x). Introduction of variable tracking assignments to improve debugging using the GNU Project Debugger (GDB) and SystemTap. More information about the improvements implemented in GCC 4.4 is available from the GCC website. 13.4. GNU C Library (glibc) The GNU C Library (glibc) packages contain the standard C libraries used by multiple programs on Red Hat Enterprise Linux. These packages contain the standard C and the standard math libraries. Without these two libraries, the Linux system cannot function properly. Red Hat Enterprise Linux 6 features version 2.11 of glibc, providing many features and enhancements, including: An enhanced dynamic memory allocation (malloc) behaviour enabling higher scalability across many sockets and cores. This is achieved by assigning threads their own memory pools and by avoiding locking in some situations. The amount of additional memory used for the memory pools (if any) can be controlled using the environment variables MALLOC_ARENA_TEST and MALLOC_ARENA_MAX. MALLOC_ARENA_TEST specifies that a test for the number of cores is performed once the number of memory pools reaches this value. MALLOC_ARENA_MAX sets the maximum number of memory pools used, regardless of the number of cores. Improved efficiency when using condition variables (condvars) with priority inheritance (PI) mutual exclusion (mutex) operations by utilizing support in the kernel for PI fast userspace mutexes. Optimized string operations on the x86_64 architecture. The getaddrinfo() function now has support for the Datagram Congestion Control Protocol (DCCP) and the UDP-Lite protocol. Additionally, getaddrinfo() now has the ability to look up IPv4 and IPv6 addresses simultaneously. 13.5. 
GNU Project debugger (GDB) The GNU Project debugger (normally referred to as GDB) debugs programs written in C, C++, and other languages by executing them in a controlled fashion, and then printing out their data. Red Hat Enterprise Linux 6 features version 7.0 of GDB. Python Scripting This updated version of GDB introduces the new Python API, allowing GDB to be automated using scripts written in the Python Programming Language. One notable feature of the Python API is the ability to format GDB output (normally referred to as pretty-printing) using Python scripts. Previously, pretty-printing in GDB was configured using a standard set of print settings. The ability to create custom pretty-printer scripts gives the user control of the way GDB displays information for specific applications. Red Hat Enterprise Linux will ship with a complete suite of pretty-printer scripts for the GNU Standard C++ Library (libstdc++). Enhanced C++ support Support for the C++ programming language in GDB has been improved. Notable improvements include: Many improvements to expression parsing. Better handling of type names. The need for extraneous quoting has nearly been eliminated. Stepping commands work properly even when the inferior throws an exception. GDB has a new "catch syscall" command. This can be used to stop the inferior whenever it makes a system call. Independent thread debugging Thread execution now permits debugging threads individually and independently of each other, enabled by the new settings "set target-async" and "set non-stop". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/compiler
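To illustrate the Python pretty-printing API mentioned above, the following is a minimal sketch of a custom pretty-printer. The C structure name point and its fields x and y are hypothetical; only the gdb module, the gdb.pretty_printers lookup list, and the to_string() method are part of the documented GDB Python API, and the script runs only inside a GDB session (for example, loaded from .gdbinit or with the source command).

import gdb  # provided by GDB itself; not available outside a GDB session

class PointPrinter(object):
    # Pretty-printer for values of the hypothetical C type 'point'.
    def __init__(self, val):
        self.val = val

    def to_string(self):
        # A gdb.Value can be indexed by structure field name.
        return "point(x=%s, y=%s)" % (self.val["x"], self.val["y"])

def lookup_point(val):
    # Return a printer for 'point' values, or None so other printers can try.
    # Exact type-name matching may need adjustment for typedefs or qualifiers.
    if str(val.type) == "point":
        return PointPrinter(val)
    return None

gdb.pretty_printers.append(lookup_point)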
Installing on bare metal | Installing on bare metal OpenShift Container Platform 4.15 Installing OpenShift Container Platform on bare metal Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: false",
"oc create -f provisioning.yaml",
"provisioning.metal3.io/provisioning-configuration created",
"oc get pods -n openshift-machine-api",
"NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 7 next-hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5",
"oc create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending",
"oc adm certificate approve <csr_name>",
"certificatesigningrequest.certificates.k8s.io/<csr_name> approved",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd",
"--- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: \"controller1-bmc\" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api",
"oc create -f controller.yaml",
"secret/controller1-bmc created baremetalhost.metal3.io/controller1 created",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s",
"oc adm drain app1 --force --ignore-daemonsets=true",
"node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained",
"oc edit bmh -n openshift-machine-api <host_name>",
"customDeploy: method: install_coreos",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m",
"oc delete bmh -n openshift-machine-api <bmh_name>",
"oc delete node <node_name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_bare_metal/index |
Eventing | Eventing Red Hat OpenShift Serverless 1.35 Using event-driven architectures with OpenShift Serverless Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/eventing/index |
Composing a customized RHEL system image | Composing a customized RHEL system image Red Hat Enterprise Linux 8 Creating customized system images with RHEL image builder on Red Hat Enterprise Linux 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/index |
Chapter 15. Virtual builds with Red Hat Quay on OpenShift Container Platform | Chapter 15. Virtual builds with Red Hat Quay on OpenShift Container Platform Documentation for the builds feature has been moved to Builders and image automation . This chapter will be removed in a future version of Red Hat Quay. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/use_red_hat_quay/red-hat-quay-builders-enhancement |
probe::signal.do_action.return | probe::signal.do_action.return Name probe::signal.do_action.return - Examining or changing a signal action completed Synopsis signal.do_action.return Values retstr Return value as a string name Name of the probe point | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-do-action-return |
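The tapset entry above lists only the probe point and its two values (retstr and name). As a minimal sketch of how the probe might be exercised from a shell (assuming SystemTap and matching kernel debuginfo are installed and the command is run as root; the message format is invented for this illustration and is not part of the tapset):

stap -e 'probe signal.do_action.return { printf("%s returned %s\n", name, retstr) }'

Each line of output prints the probe point name followed by the return value string documented above; press Ctrl+C to stop the script.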
Chapter 8. OpenShift SDN network plugin | Chapter 8. OpenShift SDN network plugin 8.1. Enabling multicast for a project 8.1.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between Red Hat OpenShift Service on AWS pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis. 8.1.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin or the dedicated-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate namespace <namespace> \ k8s.ovn.org/multicast-enabled=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: "true" Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener | [
"oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/openshift-sdn-network-plugin |
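The excerpt above covers only enabling multicast for a project. As a hedged companion example (this command is not part of the excerpt; it relies on the standard oc annotate behavior in which a trailing dash on the key removes the annotation), multicast can be disabled again for the same namespace with:

oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled-

After the annotation is removed, multicast traffic between pods in that namespace is once again dropped, matching the default behavior described above.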
Chapter 20. Managing Guest Virtual Machines with virsh | Chapter 20. Managing Guest Virtual Machines with virsh virsh is a command-line interface tool for managing guest virtual machines, and works as the primary means of controlling virtualization on Red Hat Enterprise Linux 7. The virsh command-line tool is built on the libvirt management API, and can be used to create, deploy, and manage guest virtual machines. The virsh utility is ideal for creating virtualization administration scripts, and users without root privileges can use it in read-only mode. The virsh command is installed with yum as part of the libvirt-client package. For installation instructions, see Section 2.2.1, "Installing Virtualization Packages Manually". For a general introduction to virsh, including a practical demonstration, see the Virtualization Getting Started Guide. The remaining sections of this chapter cover the virsh command set in a logical order based on usage. Note Note that when using the help command or when reading the man pages, the term 'domain' is used instead of the term guest virtual machine. This is the term used by libvirt. In cases where the screen output is displayed and the word 'domain' is used, it will not be switched to guest or guest virtual machine. In all examples, the guest virtual machine 'guest1' is used. You should replace this with the name of your guest virtual machine in all cases. When creating a name for a guest virtual machine, use a short, easy-to-remember integer (0, 1, 2...) or a text string name; in all cases you can also use the virtual machine's full UUID. Important It is important to note which user you are using. If you create a guest virtual machine using one user, you will not be able to retrieve information about it using another user. This is especially important when you create a virtual machine in virt-manager. The default user is root in that case unless otherwise specified. If you cannot list a virtual machine using the virsh list --all command, it is most likely because you are running the command as a different user than the one you used to create the virtual machine. See Important for more information. 20.1. Guest Virtual Machine States and Types Several virsh commands are affected by the state of the guest virtual machine: Transient - A transient guest does not survive reboot. Persistent - A persistent guest virtual machine survives reboot and lasts until it is deleted. During the life cycle of a virtual machine, libvirt classifies the guest as being in any of the following states: Undefined - This is a guest virtual machine that has not been defined or created. As such, libvirt is unaware of any guest in this state and will not report about guest virtual machines in this state. Shut off - This is a guest virtual machine which is defined, but is not running. Only persistent guests can be considered shut off. As such, when a transient guest virtual machine is put into this state, it ceases to exist. Running - The guest virtual machine in this state has been defined and is currently working. This state can be used with both persistent and transient guest virtual machines. Paused - The guest virtual machine's execution on the hypervisor has been suspended, or its state has been temporarily stored until it is resumed. Guest virtual machines in this state are not aware they have been suspended and do not notice that time has passed when they are resumed.
Saved - This state is similar to the paused state; however, the guest virtual machine's configuration is saved to persistent storage. Any guest virtual machine in this state is not aware it is paused and does not notice that time has passed once it has been restored. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-Managing_guest_virtual_machines_with_virsh
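As a quick illustration of the states described in this chapter, listing every defined guest with virsh shows its name alongside its current state. The guest names, ID numbers, and exact column spacing below are invented for this sketch and will differ on a real host:

virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     guest1                         running
 2     guest3                         paused
 -     guest2                         shut off

A numeric Id is assigned only while a guest is active (running or paused); persistent guests that are shut off are listed with a dash in the Id column, and transient guests disappear from the list entirely once they stop running.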
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation | Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation To install Red Hat Virtualization Manager on a system that does not have a direct connection to the Content Delivery Network, download the required packages on a system that has internet access, then create a repository that can be shared with the offline Manager machine. The system hosting the repository must be connected to the same network as the client systems where the packages are to be installed. Prerequisites A Red Hat Enterprise Linux 8 Server installed on a system that has access to the Content Delivery Network. This system downloads all the required packages, and distributes them to your offline systems. A large amount of free disk space available. This procedure downloads a large number of packages, and requires up to 50GB of free disk space. Begin by enabling the Red Hat Virtualization Manager repositories on the online system: Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the online machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Configuring the Offline Repository Servers that are not connected to the Internet can access software repositories on other systems using File Transfer Protocol (FTP). 
To create the FTP repository, install and configure vsftpd on the intended Manager machine: Install the vsftpd package: # dnf install vsftpd Enable ftp access for an anonymous user to have access to rpm files from the intended Manager machine, and to keep it secure, disable write on ftp server. Edit the /etc/vsftpd/vsftpd.conf file and change the values for anonymous_enable and write_enable as follows: anonymous_enable=YES write_enable=NO Start the vsftpd service, and ensure the service starts on boot: # systemctl start vsftpd.service # systemctl enable vsftpd.service Create a firewall rule to allow FTP service and reload the firewalld service to apply changes: # firewall-cmd --permanent --add-service=ftp # firewall-cmd --reload Red Hat Enterprise Linux 8 enforces SELinux by default, so configure SELinux to allow FTP access: # setsebool -P allow_ftpd_full_access=1 Create a sub-directory inside the /var/ftp/pub/ directory, where the downloaded packages are made available: # mkdir /var/ftp/pub/rhvrepo Download packages from all configured software repositories to the rhvrepo directory. This includes repositories for all Content Delivery Network subscription pools attached to the system, and any locally configured repositories: # reposync -p /var/ftp/pub/rhvrepo --download-metadata This command downloads a large number of packages and their metadata, and takes a long time to complete. Create a repository file, and copy it to the /etc/yum.repos.d/ directory on the intended Manager machine. You can create the configuration file manually or with a script. Run the script below on the machine hosting the repository, replacing ADDRESS in the baseurl with the IP address or FQDN of the machine hosting the repository: #!/bin/sh REPOFILE="/etc/yum.repos.d/rhev.repo" echo -e " " > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e "[USD(basename USDDIR)]" >> USDREPOFILE echo -e "name=USD(basename USDDIR)" >> USDREPOFILE echo -e "baseurl=ftp://__ADDRESS__/pub/rhvrepo/`basename USDDIR`" >> USDREPOFILE echo -e "enabled=1" >> USDREPOFILE echo -e "gpgcheck=0" >> USDREPOFILE echo -e "\n" >> USDREPOFILE done Return to Configuring the Manager . Packages are installed from the local repository, instead of from the Content Delivery Network. Troubleshooting When running reposync , the following error message appears No available modular metadata for modular package "package_name_from_module" it cannot be installed on the system Solution Ensure you have yum-utils-4.0.8-3.el8.noarch or higher installed so reposync correctly downloads all the packages. For more information, see Create a local repo with Red Hat Enterprise Linux 8 . | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest",
"dnf install vsftpd",
"anonymous_enable=YES write_enable=NO",
"systemctl start vsftpd.service systemctl enable vsftpd.service",
"firewall-cmd --permanent --add-service=ftp firewall-cmd --reload",
"setsebool -P allow_ftpd_full_access=1",
"mkdir /var/ftp/pub/rhvrepo",
"reposync -p /var/ftp/pub/rhvrepo --download-metadata",
"#!/bin/sh REPOFILE=\"/etc/yum.repos.d/rhev.repo\" echo -e \" \" > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e \"[USD(basename USDDIR)]\" >> USDREPOFILE echo -e \"name=USD(basename USDDIR)\" >> USDREPOFILE echo -e \"baseurl=ftp://__ADDRESS__/pub/rhvrepo/`basename USDDIR`\" >> USDREPOFILE echo -e \"enabled=1\" >> USDREPOFILE echo -e \"gpgcheck=0\" >> USDREPOFILE echo -e \"\\n\" >> USDREPOFILE done"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Configuring_an_Offline_Repository_for_Red_Hat_Virtualization_Manager_Installation_SM_localDB_deploy |
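For reference, the following is a minimal sketch of what the generated /etc/yum.repos.d/rhev.repo file might contain after the script runs, assuming the repository host is reachable at 10.0.0.10 and that reposync created directories named after two of the enabled repositories; both the address and the directory names are examples only and will differ in your environment:

# Hypothetical contents of /etc/yum.repos.d/rhev.repo produced by the generation script
[rhel-8-for-x86_64-baseos-eus-rpms]
name=rhel-8-for-x86_64-baseos-eus-rpms
baseurl=ftp://10.0.0.10/pub/rhvrepo/rhel-8-for-x86_64-baseos-eus-rpms
enabled=1
gpgcheck=0

[rhv-4.4-manager-for-rhel-8-x86_64-rpms]
name=rhv-4.4-manager-for-rhel-8-x86_64-rpms
baseurl=ftp://10.0.0.10/pub/rhvrepo/rhv-4.4-manager-for-rhel-8-x86_64-rpms
enabled=1
gpgcheck=0

Because gpgcheck=0 disables signature verification, a file like this is only appropriate for a trusted, internal repository host.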
Configuring AMQ Broker | Configuring AMQ Broker Red Hat AMQ 2021.Q3 For Use with AMQ Broker 7.9 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/index |
5.133. kdelibs | 5.133. kdelibs 5.133.1. RHSA-2012:1416 - Critical: kdelibs security update Updated kdelibs packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The kdelibs packages provide libraries for the K Desktop Environment (KDE). Konqueror is a web browser. Security Fixes CVE-2012-4512 A heap-based buffer overflow flaw was found in the way the CSS (Cascading Style Sheets) parser in kdelibs parsed the location of the source for font faces. A web page containing malicious content could cause an application using kdelibs (such as Konqueror) to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2012-4513 A heap-based buffer over-read flaw was found in the way kdelibs calculated canvas dimensions for large images. A web page containing malicious content could cause an application using kdelibs to crash or disclose portions of its memory. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The desktop must be restarted (log out, then log back in) for this update to take effect. 5.133.2. RHBA-2012:1251 - kdelibs bug fix update Updated kdelibs packages that fix various bugs are now available for Red Hat Enterprise Linux 6. The kdelibs packages provide libraries for the K Desktop Environment (KDE). Bug Fixes BZ# 587016 Prior to this update, the KDE Print dialog did not remember settings, nor did it allow the user to save the settings. Consequent to this, when printing several documents, users were forced to manually change settings for each printed document. With this update, the KDE Print dialog retains settings as expected. BZ# 682611 When the system was configured to use the Traditional Chinese language (the zh_TW locale), Konqueror incorrectly used a Chinese (zh_CN) version of its splash page. This update ensures that Konqueror uses the correct locale. BZ# 734734 Previously, clicking the system tray to display hidden icons could cause the Plasma Workspaces to consume an excessive amount of CPU time. This update applies a patch that fixes this error. BZ# 754161 When using Konqueror to recursively copy files and directories, if one of the subdirectories was not accessible, no warning or error message was reported to the user. This update ensures that Konqueror displays a proper warning message in this scenario. BZ# 826114 Prior to this update, an attempt to add "Terminal Emulator" to the Main Toolbar caused Konqueror to terminate unexpectedly with a segmentation fault. With this update, the underlying source code has been corrected to prevent this error so that users can now use this functionality as expected. All users of kdelibs are advised to upgrade to these updated packages, which fix these bugs. 5.133.3. RHSA-2012:1418 - Critical: kdelibs security update Updated kdelibs packages that fix two security issues are now available for Red Hat Enterprise Linux 6 FasTrack. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. 
The kdelibs packages provide libraries for the K Desktop Environment (KDE). Konqueror is a web browser. Security Fixes CVE-2012-4512 A heap-based buffer overflow flaw was found in the way the CSS (Cascading Style Sheets) parser in kdelibs parsed the location of the source for font faces. A web page containing malicious content could cause an application using kdelibs (such as Konqueror) to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2012-4513 A heap-based buffer over-read flaw was found in the way kdelibs calculated canvas dimensions for large images. A web page containing malicious content could cause an application using kdelibs to crash or disclose portions of its memory. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The desktop must be restarted (log out, then log back in) for this update to take effect. 5.133.4. RHBA-2012:0377 - kdelibs bug fix update Updated kdelibs packages that fix one bug are now available for Red Hat Enterprise Linux 6. The kdelibs packages provide libraries for K Desktop Environment (KDE). Bug Fix BZ# 698286 Previously, on big-endian architectures, including IBM System z, the Konqueror web browser could terminate unexpectedly or become unresponsive when loading certain web sites. A patch has been applied to address this issue, and Konqueror no longer crashes or hangs on the aforementioned architectures. All users of kdelibs are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kdelibs |
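For administrators applying these kdelibs errata from the command line, the update itself is a standard package operation; a minimal sketch, assuming the system is subscribed to a repository that already provides the updated packages:

# yum update kdelibs

Running yum update with no package argument applies all pending updates, including these errata. As noted above, log out and log back in afterwards so that the running desktop picks up the updated libraries.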
8.4. Common NFS Mount Options | 8.4. Common NFS Mount Options Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at mount time to make the mounted share easier to use. These options can be used with manual mount commands, /etc/fstab settings, and autofs . The following are options commonly used for NFS mounts: lookupcache= mode Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all , none , or pos / positive . nfsvers= version Specifies which version of the NFS protocol to use, where version is 3 or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and mount command. The option vers is identical to nfsvers , and is included in this release for compatibility reasons. noacl Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible with older systems. nolock Disables file locking. This setting is sometimes required when connecting to very old NFS servers. noexec Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries. nosuid Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. port= num Specifies the numeric value of the NFS server port. If num is 0 (the default value), then mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead. rsize= num and wsize= num These options set the maximum number of bytes to be transferred in a single NFS read or write operation. There is no fixed default value for rsize and wsize . By default, NFS uses the largest possible value that both the server and the client support. In Red Hat Enterprise Linux 7, the client and server maximum is 1,048,576 bytes. For more details, see the What are the default and maximum values for rsize and wsize with NFS mounts? KBase article. sec= flavors Security flavors to use for accessing files on the mounted export. The flavors value is a colon-separated list of one or more security flavors. By default, the client attempts to find a security flavor that both the client and the server support. If the server does not support any of the selected flavors, the mount operation fails. sec=sys uses local UNIX UIDs and GIDs. These use AUTH_SYS to authenticate NFS operations. sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users. sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering. sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead. tcp Instructs the NFS mount to use the TCP protocol. udp Instructs the NFS mount to use the UDP protocol. For more information, see man mount and man nfs . A sample mount command combining several of these options is shown below. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-nfs-client-config-options
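As a worked example of combining these options, the following shows one possible mount command and the equivalent /etc/fstab line; the server name, export path, and mount point are placeholders, and the particular choices (NFSv4, Kerberos with privacy, setuid disabled, 1 MB transfer sizes) are illustrative rather than recommended defaults:

# mount -t nfs -o nfsvers=4,sec=krb5p,nosuid,rsize=1048576,wsize=1048576 server.example.com:/export/data /mnt/data

# Equivalent /etc/fstab entry:
server.example.com:/export/data  /mnt/data  nfs  nfsvers=4,sec=krb5p,nosuid,rsize=1048576,wsize=1048576  0 0

Note that sec=krb5p requires a working Kerberos configuration on both the client and the server; on a simple trusted network, sec=sys is the usual choice.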
6.5. Connecting to Virtual Machines | 6.5. Connecting to Virtual Machines After creating a virtual machine, you can connect to its started guest OS. To do so, you can use: virt-viewer or remote-viewer - For details, see Graphical user interface tools for guest virtual machine management . virt-manager - For details, see Managing guests with the Virtual Machine Manager . The guest's serial console - For details, see Connecting the serial console for the Guest Virtual Machine . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/Connecting-to-vms |
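As a minimal illustration of the tools listed above, the following commands open a graphical console with virt-viewer and a text console with virsh; the guest name rhel-guest and the local qemu:///system connection URI are assumptions for the example:

# virt-viewer --connect qemu:///system rhel-guest
# virsh --connect qemu:///system console rhel-guest

The serial console command only produces output if the guest OS is configured to use its serial port (for example, with console=ttyS0 on the kernel command line), as described in the linked section on the guest serial console.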
Chapter 9. ImageStreamTag [image.openshift.io/v1] | Chapter 9. ImageStreamTag [image.openshift.io/v1] Description ImageStreamTag represents an Image that is retrieved by tag name from an ImageStream. Use this resource to interact with the tags and images in an image stream by tag, or to see the image details for a particular tag. The image associated with this resource is the most recently successfully tagged, imported, or pushed image (as described in the image stream status.tags.items list for this tag). If an import is in progress or has failed the image will be shown. Deleting an image stream tag clears both the status and spec fields of an image stream. If no image can be retrieved for a given tag, a not found error will be returned. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required tag generation lookupPolicy image 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array conditions is an array of conditions that apply to the image stream tag. conditions[] object TagEventCondition contains condition information for a tag event. generation integer generation is the current generation of the tagged image - if tag is provided and this value is not equal to the tag generation, a user has requested an import that has not completed, or conditions will be filled out indicating any error. image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lookupPolicy object ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata tag object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. 9.1.1. .conditions Description conditions is an array of conditions that apply to the image stream tag. Type array 9.1.2. .conditions[] Description TagEventCondition contains condition information for a tag event. 
Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 9.1.3. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 9.1.4. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 9.1.5. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 9.1.6. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 9.1.7. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 9.1.8. .image.signatures Description Signatures holds all signatures of the image. Type array 9.1.9. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 9.1.10. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 9.1.11. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 9.1.12. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 9.1.13. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 9.1.14. .lookupPolicy Description ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. 
Type object Required local Property Type Description local boolean local will change the docker short image references (like "mysql" or "php:latest") on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. Only works within the current namespace. 9.1.15. .tag Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 9.1.16. .tag.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 9.1.17. .tag.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. 
Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 9.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagestreamtags GET : list objects of kind ImageStreamTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags GET : list objects of kind ImageStreamTag POST : create an ImageStreamTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} DELETE : delete an ImageStreamTag GET : read the specified ImageStreamTag PATCH : partially update the specified ImageStreamTag PUT : replace the specified ImageStreamTag 9.2.1. /apis/image.openshift.io/v1/imagestreamtags HTTP method GET Description list objects of kind ImageStreamTag Table 9.1. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTagList schema 401 - Unauthorized Empty 9.2.2. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags HTTP method GET Description list objects of kind ImageStreamTag Table 9.2. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTagList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageStreamTag Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body ImageStreamTag schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 202 - Accepted ImageStreamTag schema 401 - Unauthorized Empty 9.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the ImageStreamTag HTTP method DELETE Description delete an ImageStreamTag Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. 
HTTP responses HTTP code Reponse body 200 - OK Status_v5 schema 202 - Accepted Status_v5 schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageStreamTag Table 9.9. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageStreamTag Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageStreamTag Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body ImageStreamTag schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/imagestreamtag-image-openshift-io-v1 |
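For day-to-day use, the REST endpoints above are usually driven through the oc client rather than called directly; a brief sketch follows, in which the my-project namespace, the ruby and my-stream image streams, and the registry host are placeholders:

$ oc get imagestreamtag ruby:latest -n my-project -o yaml    # read a tag (GET)
$ oc tag registry.example.com/my-image:v1 my-stream:v1 -n my-project    # create or update a tag
$ oc delete imagestreamtag my-stream:v1 -n my-project    # delete a tag (DELETE)

Each command corresponds to one of the verbs on the /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} endpoint described above.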
Chapter 247. Olingo4 Component | Chapter 247. Olingo4 Component Available as of Camel version 2.19 The Olingo4 component utilizes Apache Olingo version 4.0 APIs to interact with OData 4.0 compliant service. Since version 4.0 OData is OASIS standard and number of popular open source and commercial vendors and products support this protocol. A sample list of supporting products can be found on the OData website . The Olingo4 component supports reading entity sets, entities, simple and complex properties, counts, using custom and OData system query parameters. It supports updating entities and properties. It also supports submitting queries and change requests as a single OData batch operation. The component supports configuring HTTP connection parameters and headers for OData service connection. This allows configuring use of SSL, OAuth2.0, etc. as required by the target OData service. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-olingo4</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 247.1. URI format olingo4://endpoint/<resource-path>?[options] 247.2. Olingo4 Options The Olingo4 component supports 3 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration Olingo4Configuration useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Olingo4 endpoint is configured using URI syntax: with the following path and query parameters: 247.2.1. Path Parameters (2 parameters): Name Description Default Type apiName Required What kind of operation to perform Olingo4ApiName methodName Required What sub operation to use for the selected operation String 247.2.2. Query Parameters (14 parameters): Name Description Default Type connectTimeout (common) HTTP connection creation timeout in milliseconds, defaults to 30,000 (30 seconds) 30000 int contentType (common) Content-Type header value can be used to specify JSON or XML message format, defaults to application/json;charset=utf-8 application/json;charset=utf-8 String httpAsyncClientBuilder (common) Custom HTTP async client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely HttpAsyncClientBuilder httpClientBuilder (common) Custom HTTP client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely HttpClientBuilder httpHeaders (common) Custom HTTP headers to inject into every request, this could include OAuth tokens, etc. Map inBody (common) Sets the name of a parameter to be passed in the exchange In Body String proxy (common) HTTP proxy server configuration HttpHost serviceUri (common) Target OData service base URI, e.g. 
http://services.odata.org/OData/OData.svc String socketTimeout (common) HTTP request timeout in milliseconds, defaults to 30,000 (30 seconds) 30000 int sslContextParameters (common) To configure security using SSLContextParameters SSLContextParameters bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 247.3. Spring Boot Auto-Configuration The component supports 14 options, which are listed below. Name Description Default Type camel.component.olingo4.configuration.api-name What kind of operation to perform Olingo4ApiName camel.component.olingo4.configuration.connect-timeout HTTP connection creation timeout in milliseconds, defaults to 30,000 (30 seconds) 30000 Integer camel.component.olingo4.configuration.content-type Content-Type header value can be used to specify JSON or XML message format, defaults to application/json;charset=utf-8 application/json;charset=utf-8 String camel.component.olingo4.configuration.http-async-client-builder Custom HTTP async client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely HttpAsyncClientBuilder camel.component.olingo4.configuration.http-client-builder Custom HTTP client builder for more complex HTTP client configuration, overrides connectionTimeout, socketTimeout, proxy and sslContext. Note that a socketTimeout MUST be specified in the builder, otherwise OData requests could block indefinitely HttpClientBuilder camel.component.olingo4.configuration.http-headers Custom HTTP headers to inject into every request, this could include OAuth tokens, etc. Map camel.component.olingo4.configuration.method-name What sub operation to use for the selected operation String camel.component.olingo4.configuration.proxy HTTP proxy server configuration HttpHost camel.component.olingo4.configuration.service-uri Target OData service base URI, e.g. http://services.odata.org/OData/OData.svc String camel.component.olingo4.configuration.socket-timeout HTTP request timeout in milliseconds, defaults to 30,000 (30 seconds) 30000 Integer camel.component.olingo4.configuration.ssl-context-parameters To configure security using SSLContextParameters SSLContextParameters camel.component.olingo4.enabled Enable olingo4 component true Boolean camel.component.olingo4.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. 
true Boolean camel.component.olingo4.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 247.4. Producer Endpoints Producer endpoints can use the endpoint names and options listed above. Producer endpoints can also use a special option inBody that in turn should contain the name of the endpoint option whose value will be contained in the Camel Exchange In message. The inBody option defaults to data for endpoints that take that option. Any of the endpoint options can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelOlingo4.<option> . Note that the inBody option overrides the message header, i.e. the endpoint option inBody=option would override a CamelOlingo4.option header. In addition, query parameters can be specified. Note that the resourcePath option can either be specified in the URI as a part of the URI path, as an endpoint option ?resourcePath=<resource-path> or as a header value CamelOlingo4.resourcePath. The OData entity key predicate can either be a part of the resource path, e.g. Manufacturers('1') , where '1' is the key predicate, or be specified separately with resource path Manufacturers and keyPredicate option '1' . Endpoint Options HTTP Method Result Body Type batch data, endpointHttpHeaders POST with multipart/mixed batch request java.util.List<org.apache.camel.component.olingo4.api.batch.Olingo4BatchResponse> create data, resourcePath, endpointHttpHeaders POST org.apache.olingo.client.api.domain.ClientEntity for new entries org.apache.olingo.commons.api.http.HttpStatusCode for other OData resources delete resourcePath, endpointHttpHeaders DELETE org.apache.olingo.commons.api.http.HttpStatusCode merge data, resourcePath, endpointHttpHeaders MERGE org.apache.olingo.commons.api.http.HttpStatusCode patch data, resourcePath, endpointHttpHeaders PATCH org.apache.olingo.commons.api.http.HttpStatusCode read queryParams, resourcePath, endpointHttpHeaders GET Depends on the OData resource being queried as described below update data, resourcePath, endpointHttpHeaders PUT org.apache.olingo.commons.api.http.HttpStatusCode 247.5. Endpoint HTTP Headers (since Camel 2.20 ) The component level configuration property httpHeaders supplies static HTTP header information. However, some systems require dynamic header information to be passed to and received from the endpoint. A sample use case would be systems that require dynamic security tokens. The endpointHttpHeaders and responseHttpHeaders endpoint properties provide this capability. Set headers that need to be passed to the endpoint in the CamelOlingo4.endpointHttpHeaders property and the response headers will be returned in a CamelOlingo4.responseHttpHeaders property. Both properties are of the type java.util.Map<String, String> . A short route sketch that sets and reads these properties is included after the command listing below. 247.6. OData Resource Type Mapping The result of the read endpoint and the data type of the data option depend on the OData resource being queried, created or modified.
OData Resource Type Resource URI from resourcePath and keyPredicate In or Out Body Type Entity data model $metadata org.apache.olingo.commons.api.edm.Edm Service document / org.apache.olingo.client.api.domain.ClientServiceDocument OData entity set <entity-set> org.apache.olingo.client.api.domain.ClientEntitySet OData entity <entity-set>(<key-predicate>) org.apache.olingo.client.api.domain.ClientEntity for Out body (response) java.util.Map<String, Object> for In body (request) Simple property <entity-set>(<key-predicate>)/<simple-property> org.apache.olingo.client.api.domain.ClientPrimitiveValue Simple property value <entity-set>(<key-predicate>)/<simple-property>/$value org.apache.olingo.client.api.domain.ClientPrimitiveValue Complex property <entity-set>(<key-predicate>)/<complex-property> org.apache.olingo.client.api.domain.ClientComplexValue Count <resource-uri>/$count java.lang.Long 247.7. Consumer Endpoints Only the read endpoint can be used as a consumer endpoint. Consumer endpoints can use Scheduled Poll Consumer Options with a consumer. prefix to schedule endpoint invocation. By default, consumer endpoints that return an array or collection will generate one exchange per element, and their routes will be executed once for each exchange. This behavior can be disabled by setting the endpoint property consumer.splitResult=false . 247.8. Message Headers Any URI option can be provided in a message header for producer endpoints with a CamelOlingo4. prefix. 247.9. Message Body All result message bodies utilize objects provided by the underlying Apache Olingo 4.0 API used by the Olingo4Component. Producer endpoints can specify the option name for the incoming message body in the inBody endpoint URI parameter. For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages, unless consumer.splitResult is set to false . 247.10. Use cases The following route reads the top 5 entries from the People entity ordered by ascending FirstName property. from("direct:...") .setHeader("CamelOlingo4.$top", "5") .to("olingo4://read/People?orderBy=FirstName%20asc"); The following route reads the Airports entity using the key property value in the incoming id header. from("direct:...") .setHeader("CamelOlingo4.keyPredicate", header("id")) .to("olingo4://read/Airports"); The following route creates a People entity using the ClientEntity in the message body. from("direct:...") .to("olingo4://create/People"); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-olingo4</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"olingo4://endpoint/<resource-path>?[options]",
"olingo4:apiName/methodName",
"from(\"direct:...\") .setHeader(\"CamelOlingo4.USDtop\", \"5\"); .to(\"olingo4://read/People?orderBy=FirstName%20asc\");",
"from(\"direct:...\") .setHeader(\"CamelOlingo4.keyPredicate\", header(\"id\")) .to(\"olingo4://read/Airports\");",
"from(\"direct:...\") .to(\"olingo4://create/People\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/olingo4-component |
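To illustrate the endpointHttpHeaders and responseHttpHeaders properties described in section 247.5, the route sketch below builds a per-request Authorization header before calling the component and then reads the headers returned by the service. The token lookup, the People entity set, and the assumption that serviceUri and other connection options are already configured on the component are illustrative choices, not part of the component contract:

import java.util.HashMap;
import java.util.Map;

import org.apache.camel.builder.RouteBuilder;

public class Olingo4DynamicHeadersRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:readPeople")
            .process(exchange -> {
                // Build per-request headers; the bearer token is assumed to arrive
                // in the "token" header of the incoming message.
                Map<String, String> endpointHeaders = new HashMap<>();
                endpointHeaders.put("Authorization",
                        "Bearer " + exchange.getIn().getHeader("token", String.class));
                exchange.getIn().setHeader("CamelOlingo4.endpointHttpHeaders", endpointHeaders);
            })
            .to("olingo4://read/People")
            .process(exchange -> {
                // Headers returned by the OData service are exposed in
                // CamelOlingo4.responseHttpHeaders as a Map<String, String>.
                Map<?, ?> responseHeaders =
                        exchange.getIn().getHeader("CamelOlingo4.responseHttpHeaders", Map.class);
                // Stash them under a plain header so a later step can inspect them.
                exchange.getIn().setHeader("odataResponseHeaders", responseHeaders);
            });
    }
}

For a producer call such as this one, the whole result set arrives in a single exchange; the per-element splitting described in section 247.7 applies only to consumer endpoints.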
Chapter 7. Creating infrastructure machine sets | Chapter 7. Creating infrastructure machine sets Important This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. 7.1. OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Container Storage Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. Additional resources For information on infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. 7.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 7.2.1. Creating machine sets for different clouds Use the sample machine set for your cloud. 7.2.1.1. Sample YAML for a machine set custom resource on AWS This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a region: us-east-1 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-us-east-1a 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: worker-user-data 1 3 5 12 13 14 15 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, <infra> node label, and zone. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. 11 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-worker-<zone> Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 7.2.1.2. Sample YAML for a machine set custom resource on Azure This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 1 5 7 13 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. 14 Specify the region to place machines on. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. 7.2.1.3. Sample YAML for a machine set custom resource on GCP This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . 
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 11 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 12 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network 13 subnetwork: <infrastructure_id>-worker-subnet 14 projectID: <project_name> 15 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com 16 17 scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker 18 userDataSecret: name: worker-user-data zone: us-central1-a 1 2 3 4 5 8 13 14 16 18 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. 11 Specify the path to the image that is used in current machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a 12 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 15 17 Specify the name of the GCP project that you use for your cluster. Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 7.2.1.4. Sample YAML for a machine set custom resource on RHOSP This sample YAML defines a machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 7.2.1.5. Sample YAML for a machine set custom resource on RHV This sample YAML defines a machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Optional: Specify the VM instance type. Warning The instance_type_id field is deprecated and will be removed in a future release. If you include this parameter, you do not need to specify the hardware parameters of the VM including CPU and memory because this parameter overrides all hardware parameters. 17 Optional: The CPU field contains the CPU's configuration, including sockets, cores, and threads. 18 Optional: Specify the number of sockets for a VM. 19 Optional: Specify the number of cores per socket. 20 Optional: Specify the number of threads per core. 21 Optional: Specify the size of a VM's memory in MiB. 22 Optional: Root disk of the node. 23 Optional: Specify the size of the bootable disk in GiB. 24 Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 25 Optional: Specify the vNIC profile ID. 26 Specify the name of the secret that holds the RHV credentials. 27 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. 
For more information see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 7.2.1.6. Sample YAML for a machine set custom resource on vSphere This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. 11 Specify the vSphere VM network to deploy the machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter Datacenter to deploy the machine set on. 14 Specify the vCenter Datastore to deploy the machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 7.2.2. Creating a machine set In addition to the ones created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). 
Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m Check values of a specific machine set: USD oc get machineset <machineset_name> -n \ openshift-machine-api -o yaml Example output ... template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a 1 The cluster ID. 2 A default node label. Create the new MachineSet CR: USD oc create -f <file_name>.yaml View the list of machine sets: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 7.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes (also known as the master nodes) are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. 
For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 7.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
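Before creating the pool, it can help to confirm that the infra label is actually present on the nodes you expect it to manage, because the nodeSelector in the example only matches nodes that carry that label. A minimal check, using the label key applied earlier in this procedure:
oc get nodes -l node-role.kubernetes.io/infra
Only the labeled nodes should be listed; if the output is empty, re-run the oc label node command from the first step of this procedure.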
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
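The contents field in this example uses a data URL, so source: data:,infra writes the literal string infra into /etc/infratest on every node in the pool. Once the pool has rolled out the change (after the next step), one way to spot-check a node is an oc debug session; this is only a sketch, and <node_name> is a placeholder for one of your infra nodes:
oc debug node/<node_name> -- chroot /host cat /etc/infratest
The command should print infra on a node that has applied the rendered-infra configuration.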
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 7.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, with an infra node being assigned as a worker, there is a chance user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 7.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. 
Add the following code to the Pod object specification: tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. 7.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created. 7.4.1. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator Add the nodeSelector stanza that references the infra label to the spec section, as shown: spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. 
Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0 Because the role list includes infra , the pod is running on the correct node. 7.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster Modify the spec section of the object to resemble the following YAML: spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: "" Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 7.4.3. Moving the monitoring solution By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map. Procedure Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: "" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" grafana: nodeSelector: node-role.kubernetes.io/infra: "" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" Running this config map forces the components of the monitoring stack to redeploy to infrastructure nodes. 
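If the infra nodes carry the node-role.kubernetes.io/infra:NoSchedule taint recommended earlier, a nodeSelector alone is not enough: the monitoring pods also need a matching toleration, or they will have nowhere to schedule. The following sketch assumes the config map also honors a per-component tolerations list with the same shape as the Pod toleration shown earlier; one component is shown, and the same stanza can be repeated for the others:
alertmanagerMain:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  tolerations:
  - key: node-role.kubernetes.io/infra
    operator: Exists
    effect: NoSchedule
For the same reason, check the tolerations support on the IngressController (spec.nodePlacement.tolerations) and the registry config (spec.tolerations) if you taint the nodes before moving those workloads.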
Apply the new config map: USD oc create -f cluster-monitoring-configmap.yaml Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 7.4.4. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites OpenShift Logging and Elasticsearch must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana Pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... 
visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s Additional resources See the monitoring documentation for the general instructions on moving OpenShift Container Platform components. | [
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a region: us-east-1 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-us-east-1a 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 11 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 12 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network 13 subnetwork: <infrastructure_id>-worker-subnet 14 projectID: <project_name> 15 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com 16 17 scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker 18 userDataSecret: name: worker-user-data zone: us-central1-a",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule",
"tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\"",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: \"\"",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"oc create -f cluster-monitoring-configmap.yaml",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/creating-infrastructure-machinesets |
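Taken together, the commands listed above describe how infrastructure workloads (router, registry, monitoring, logging) are steered onto dedicated infra nodes. As a rough end-to-end sketch that is not part of the original procedure, the sequence below labels and taints a single node and then checks where the default router pods land; the node name is an assumption borrowed from the sample output above, so substitute one from oc get nodes.

NODE=ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l   # assumed node name -- pick one from 'oc get nodes'

# Mark the node as an infra node and keep ordinary workloads off it.
oc label node "$NODE" node-role.kubernetes.io/infra=
oc adm taint nodes "$NODE" node-role.kubernetes.io/infra:NoSchedule

# After adding the nodePlacement and tolerations shown above, verify that
# the router pods are rescheduled onto the infra node.
oc get pods -n openshift-ingress -o wide

The same label and taint pair is what the registry, monitoring, and logging snippets above select on through node-role.kubernetes.io/infra.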
20.17. Guest Virtual Machine Retrieval Commands | 20.17. Guest Virtual Machine Retrieval Commands 20.17.1. Displaying the Host Physical Machine Name The virsh domhostname domain command displays the specified guest virtual machine's physical host name, provided the hypervisor can publish it. Example 20.39. How to display the host physical machine name The following example displays the host physical machine name for the guest1 virtual machine, if the hypervisor makes it available: # virsh domhostname guest1 20.17.2. Displaying General Information about a Virtual Machine The virsh dominfo domain command displays basic information about a specified guest virtual machine. This command may also be used with the [--domain guestname ] option. Example 20.40. How to display general information about the guest virtual machine The following example displays general information about the guest virtual machine named guest1 : 20.17.3. Displaying a Virtual Machine's ID Number Although virsh list includes the ID in its output, the virsh domid domain|ID command displays the ID for the guest virtual machine, provided it is running. The ID changes each time you run the virtual machine. If the guest virtual machine is shut off, the machine name is displayed as a series of dashes ('-----'). This command may also be used with the [--domain guestname ] option. Example 20.41. How to display a virtual machine's ID number In order to run this command and receive any usable output, the virtual machine should be running. The following example produces the ID number of the guest1 virtual machine: # virsh domid guest1 20.17.4. Aborting Running Jobs on a Guest Virtual Machine The virsh domjobabort domain command aborts the currently running job on the specified guest virtual machine. This command may also be used with the [--domain guestname ] option. Example 20.42. How to abort a running job on a guest virtual machine In this example, there is a job running on the guest1 virtual machine that you want to abort. When running the command, change guest1 to the name of your virtual machine: # virsh domjobabort guest1 20.17.5. Displaying Information about Jobs Running on the Guest Virtual Machine The virsh domjobinfo domain command displays information about jobs running on the specified guest virtual machine, including migration statistics. This command may also be used with the [--domain guestname ] option, or with the --completed option to return statistics for a recently completed job. Example 20.43. How to display statistical feedback The following example lists statistical information about the guest1 virtual machine: 20.17.6. Displaying the Guest Virtual Machine's Name The virsh domname domainID command displays the guest virtual machine's name, given its ID or UUID. Although the virsh list --all command also displays the guest virtual machine's name, this command lists only the guest's name. Example 20.44. How to display the name of the guest virtual machine The following example displays the name of the guest virtual machine with domain ID 8 : 20.17.7. Displaying the Virtual Machine's State The virsh domstate domain command displays the state of the given guest virtual machine. Using the --reason argument also displays the reason for the displayed state. This command may also be used with the [--domain guestname ] option. If the command reveals an error, run the virsh domblkerror command. 
See Section 20.12.7, "Displaying Errors on Block Devices" for more details. Example 20.45. How to display the guest virtual machine's current state The following example displays the current state of the guest1 virtual machine: 20.17.8. Displaying the Connection State to the Virtual Machine The virsh domcontrol domain command displays the state of an interface to the hypervisor that is used to control a specified guest virtual machine. For states that are not OK or Error, it also prints the number of seconds that have elapsed since the control interface entered the displayed state. Example 20.46. How to display the guest virtual machine's interface state The following example displays the current state of the guest1 virtual machine's interface. | [
"virsh dominfo guest1 Id: 8 Name: guest1 UUID: 90e0d63e-d5c1-4735-91f6-20a32ca22c40 OS Type: hvm State: running CPU(s): 1 CPU time: 271.9s Max memory: 1048576 KiB Used memory: 1048576 KiB Persistent: yes Autostart: disable Managed save: no Security model: selinux Security DOI: 0 Security label: system_u:system_r:svirt_t:s0:c422,c469 (enforcing)",
"virsh domid guest1 8",
"virsh domjobinfo guest1 Job type: Unbounded Time elapsed: 1603 ms Data processed: 47.004 MiB Data remaining: 658.633 MiB Data total: 1.125 GiB Memory processed: 47.004 MiB Memory remaining: 658.633 MiB Memory total: 1.125 GiB Constant pages: 114382 Normal pages: 12005 Normal data: 46.895 MiB Expected downtime: 0 ms Compression cache: 64.000 MiB Compressed data: 0.000 B Compressed pages: 0 Compression cache misses: 12005 Compression overflows: 0",
"virsh domname 8 guest1",
"virsh domstate guest1 running",
"virsh domcontrol guest1 ok"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-domain_retrieval_commands |
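A short wrapper can combine several of these retrieval commands. The loop below is only an illustrative sketch, not part of the guide; it assumes virsh can reach the local hypervisor and uses the same guest1-style naming as the examples above.

# Print the ID and state (with reason) for every defined guest.
for dom in $(virsh list --all --name); do
    printf '%s: id=%s state=%s\n' "$dom" \
        "$(virsh domid "$dom" 2>/dev/null)" \
        "$(virsh domstate "$dom" --reason)"
done

Shut-off guests typically report no usable ID, matching the behavior described for virsh domid above.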
Chapter 2. Initial LVS Configuration | Chapter 2. Initial LVS Configuration After installing Red Hat Enterprise Linux, you must take some basic steps to set up both the LVS routers and the real servers. This chapter covers these initial steps in detail. Note The LVS router node that becomes the active node once LVS is started is also referred to as the primary node . When configuring LVS, use the Piranha Configuration Tool on the primary node. 2.1. Configuring Services on the LVS Routers The Red Hat Enterprise Linux installation program installs all of the components needed to set up LVS, but the appropriate services must be activated before configuring LVS. For both LVS routers, set the appropriate services to start at boot time. There are three primary tools available for setting services to activate at boot time under Red Hat Enterprise Linux: the command line program chkconfig , the ncurses-based program ntsysv , and the graphical Services Configuration Tool . All of these tools require root access. Note To attain root access, open a shell prompt and use the su - command followed by the root password. For example: On the LVS routers, there are three services which need to be set to activate at boot time: The piranha-gui service (primary node only) The pulse service The sshd service If you are clustering multi-port services or using firewall marks, you must also enable the iptables service. It is best to set these services to activate in both runlevel 3 and runlevel 5. To accomplish this using chkconfig , type the following command for each service: /sbin/chkconfig --level 35 daemon on In the above command, replace daemon with the name of the service you are activating. To get a list of services on the system as well as what runlevel they are set to activate on, issue the following command: /sbin/chkconfig --list Warning Turning any of the above services on using chkconfig does not actually start the daemon. To do this use the /sbin/service command. See Section 2.3, "Starting the Piranha Configuration Tool Service" for an example of how to use the /sbin/service command. For more information on runlevels and configuring services with ntsysv and the Services Configuration Tool , refer to the chapter titled "Controlling Access to Services" in the Red Hat Enterprise Linux System Administration Guide . | [
"su - root password"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/ch-initial-setup-vsa |
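Pulling the commands in this section together, one possible startup sequence on the primary LVS router looks like the following sketch; it assumes the piranha package is already installed and that iptables is only needed when clustering multi-port services or using firewall marks.

# Enable the required services in runlevels 3 and 5.
/sbin/chkconfig --level 35 piranha-gui on    # primary node only
/sbin/chkconfig --level 35 pulse on
/sbin/chkconfig --level 35 sshd on
/sbin/chkconfig --level 35 iptables on       # only for multi-port services or firewall marks

# Confirm the runlevel settings for one of the services.
/sbin/chkconfig --list pulse

# chkconfig does not start the daemon; start it explicitly.
/sbin/service piranha-gui start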
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_google_cloud/providing-feedback-on-red-hat-documentation_gcp |
Chapter 23. Installing on any platform | Chapter 23. Installing on any platform 23.1. Installing a cluster on any platform In OpenShift Container Platform version 4.11, you can install a cluster on any infrastructure that you provision, including virtualization and cloud environments. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 23.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 23.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 23.1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 23.1.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 23.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 
23.1.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 23.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 23.1.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 23.1.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. 
See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 23.1.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 23.1.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 23.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 23.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 23.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 23.1.3.5. 
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 23.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. 
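The loop below is a compact, optional sketch for confirming that every record in the table above resolves. It assumes the ocp4 cluster name, example.com base domain, and 192.168.1.5 nameserver used in the examples that follow; the detailed validation procedure later in this chapter remains the authoritative check.

NS=192.168.1.5      # assumed nameserver IP
CLUSTER=ocp4        # assumed cluster name
DOMAIN=example.com  # assumed base domain

for host in api api-int test.apps bootstrap master0 master1 master2 worker0 worker1; do
    fqdn="${host}.${CLUSTER}.${DOMAIN}"
    printf '%-45s %s\n' "$fqdn" "$(dig +short @"$NS" "$fqdn")"
done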
Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 23.1.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 23.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 23.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 23.1.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 23.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. 
A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 23.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 23.1.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 23.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 2 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 3 bind *:22623 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 4 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 23.1.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
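Building on the SELinux note and the listening-port tip in the HAProxy example above, a quick load-balancer sanity check before starting the preparation procedure might look like the following sketch; it assumes a systemd-managed haproxy service and uses ss in place of the netstat command mentioned in the text.

# Allow HAProxy to bind to the API and machine config server ports with SELinux enforcing.
setsebool -P haproxy_connect_any=1

# Reload the configuration and confirm that all four frontends are listening.
systemctl restart haproxy
ss -nltp | grep -E ':(6443|22623|443|80) '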
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 23.1.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 23.1.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 23.1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 23.1.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 23.1.9. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. 
In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 23.1.9.1. Sample install-config.yaml file for other platforms You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{"auths": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. 
If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 12 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23.1.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 23.1.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. 
Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 23.1.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
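After the command completes, you can optionally confirm that the manifests were generated by listing the directory. This is a quick sanity check rather than part of the official procedure, and the exact file names vary by release: ls <installation_directory>/manifests/ The listing should include the cluster-scheduler-02-config.yml file that is referenced in the following step.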
Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 23.1.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. 
In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. Note As of OpenShift Container Platform 4.6, the RHCOS ISO and other installation artifacts provide support for installation on disks with 4K sectors. 23.1.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
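If you prefer to check all three Ignition config files in one pass, you can use a short shell loop from the installation host. This is an optional convenience, not part of the official procedure; it assumes the same <HTTP_server> placeholder as the preceding example and prints the HTTP status code returned for each file:
for node_type in bootstrap master worker; do
  echo -n "${node_type}.ign: "
  curl -k -s -o /dev/null -w '%{http_code}\n' "http://<HTTP_server>/${node_type}.ign"
done
A 200 response for each file confirms that the Ignition configs are being served; the cluster machines must also be able to reach the same URLs when they boot.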
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command:
USD openshift-install coreos print-stream-json | grep '\.iso[^.]'
Example output
"location": "<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.11-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type.
ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso
Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface.
Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments.
Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:
USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2
1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation.
2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step.
Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer .
The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:
USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
Monitor the progress of the RHCOS installation on the console of the machine.
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 23.1.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
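If you are building a small test environment and do not yet have a permanent HTTP server for these files, one common stopgap is to serve the installation directory with a minimal HTTP server. This is a hypothetical convenience for lab use only, not part of the official procedure; production installations should use a managed HTTP server that remains available for later node additions:
python3 -m http.server 8080 --directory <installation_directory>
Any machine that PXE boots must be able to reach this host and port when it requests its Ignition config and the rootfs image.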
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.11-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.11-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.11-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. 
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. 
If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 23.1.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 23.1.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/sda Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 23.1.11.3.2. 
Disk partitioning The disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless the default partitioning configuration is overridden. During the RHCOS installation, the size of the root file system is increased to use the remaining available space on the target device. There are two cases where you might want to override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node: Creating separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for mounting /var or a subdirectory of /var , such as /var/lib/etcd , on a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retaining existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Warning The use of custom partitions could result in those partitions not being monitored by OpenShift Container Platform or alerted on. If you are overriding the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. 23.1.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. 
Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 23.1.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. 
Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/sda The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/sda This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 23.1.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 23.1.11.3.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 23.1.11.3.4.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. 
If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none
Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example:
ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0
To configure a VLAN on a network interface and use DHCP, run the following command:
ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0
Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example:
nameserver=1.1.1.1 nameserver=8.8.8.8
Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface.
To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example:
bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example:
bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and DHCP, for example:
ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0
Use the following example to configure the bonded interface with a VLAN and a static IP address:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0
Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ).
Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article .
Use the following example to configure a network team:
team=team0:em1,em2 ip=team0:dhcp
23.1.11.3.4.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image.
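For example, a single invocation might combine several of the options described in the following table. This is an illustrative sketch rather than a required command; the Ignition URL, digest, kernel argument, and target device are placeholders that depend on your environment:
sudo coreos-installer install \
  --ignition-url=http://<HTTP_server>/worker.ign \
  --ignition-hash=sha512-<digest> \
  --copy-network \
  --append-karg console=ttyS0,115200n8 \
  /dev/sda
The --copy-network option carries over any networking that you configured in the live environment, and --append-karg adds a default kernel argument to the installed system.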
The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 23.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . 
-f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 23.1.11.3.4.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 23.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. 
While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 23.1.11.4. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 23.1.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk.
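As an illustrative aside, and not part of the documented procedure, you can follow what the bootstrap machine is doing during this phase by streaming its journal over SSH; the host name is a placeholder taken from the earlier DNS examples in this record, and the two systemd units are the ones a bootstrap node typically runs, so adjust both to your environment: USD ssh core@bootstrap.ocp4.example.com 'journalctl -b -f -u release-image.service -u bootkube.service'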
The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 23.1.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 23.1.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
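As a purely illustrative convenience, not part of the documented procedure, you can keep a live view of node registration open in a second terminal while you work through the approvals that follow; the 5-second interval is an arbitrary choice: USD watch -n5 oc get nodes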
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
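The Note earlier in this procedure states that, on user-provisioned infrastructure, you must implement your own method of automatically approving kubelet serving certificate requests, but it does not prescribe an implementation. One minimal sketch, assuming an authenticated oc client with cluster-admin rights and that approving every pending CSR is acceptable in your environment (the loop performs no identity checks, so it is not a production-grade approver), is a polling loop built from the same go-template query shown above: USD while true; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve; sleep 60; done After the loop has approved the outstanding requests, the machines reach the Ready status as described above.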
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 23.1.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Configure the Operators that are not available. 23.1.15.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 23.1.15.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 23.1.15.3. 
Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 23.1.15.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.11 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 23.1.15.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 23.1.15.3.3.
Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and run with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 23.1.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 23.1.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 23.1.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 2 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 3 bind *:22623 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 4 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.11-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.11-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.11-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.11-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.11-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.11-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.11/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.11 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.0 True False False 19m baremetal 4.11.0 True False False 37m cloud-credential 4.11.0 True False False 40m cluster-autoscaler 4.11.0 True False False 37m config-operator 4.11.0 True False False 38m console 4.11.0 True False False 26m csi-snapshot-controller 4.11.0 True False False 37m dns 4.11.0 True False False 37m etcd 4.11.0 True False False 36m image-registry 4.11.0 True False False 31m ingress 4.11.0 True False False 30m insights 4.11.0 True False False 31m kube-apiserver 4.11.0 True False False 26m kube-controller-manager 4.11.0 True False False 36m kube-scheduler 4.11.0 True False False 36m kube-storage-version-migrator 4.11.0 True False False 37m machine-api 4.11.0 True False False 29m machine-approver 4.11.0 True False False 37m machine-config 4.11.0 True False False 36m marketplace 4.11.0 True False False 37m monitoring 4.11.0 True False False 29m network 4.11.0 True False False 38m node-tuning 4.11.0 True False False 37m openshift-apiserver 4.11.0 True False False 32m openshift-controller-manager 4.11.0 True False False 30m openshift-samples 4.11.0 True False False 32m operator-lifecycle-manager 4.11.0 True False False 37m operator-lifecycle-manager-catalog 4.11.0 True False False 37m operator-lifecycle-manager-packageserver 4.11.0 True False False 32m service-ca 4.11.0 True False False 38m storage 4.11.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-any-platform |
Operators | Operators OpenShift Container Platform 4.13 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operators/index |
4.2. Configuring Virtual Machines on Red Hat Gluster Storage volumes using the Red Hat Virtualization Manager | 4.2. Configuring Virtual Machines on Red Hat Gluster Storage volumes using the Red Hat Virtualization Manager The following procedure describes how to add a Red Hat Gluster Storage server for virtualization using Red Hat Virtualization Manager. Note It is recommended that you use a separate data center for Red Hat Gluster Storage nodes. Procedure 4.2. To Add a Red Hat Gluster Storage Server for Virtualization Using Red Hat Virtualization Manager Create a data center: Select the Data Centers resource tab to list all data centers. Click New to open the New Data Center window. Figure 4.1. New Data Center Window Enter the Name and Description of the data center. Select the storage Type as Shared from the drop-down menu. Select the Quota Mode as Disabled . Click OK . The new data center is Uninitialized until you configure the cluster, host, and storage settings. Create a cluster: Select the Clusters resource tab to list all clusters. Click New to open the New Cluster window. Figure 4.2. New Cluster Window Select a Data Center for the cluster from the drop-down menu. Enter a Name and Description for the cluster. Select the CPU Name and Compatibility Version from the drop-down menus. Check Enable Virt Service . Click OK . Add hosts: Select the Hosts resource tab to view a list of all hosts in the system. Click New to open the New Host window. Figure 4.3. New Host Window Important A Red Hat Enterprise Linux hypervisor and Red Hat Virtualization hypervisor on a single VDSM cluster accessing the same virtual machine image store is not supported. Select the Data Center and Host Cluster for the new host from the drop-down menus. Enter the Name , Address , and Root Password of the new hypervisor host. Check Automatically configure host firewall if required. Click OK . The new host appears in the list of hypervisor hosts with the status Installing . After the host is activated, the status changes to Up automatically. Create and configure volumes on the Red Hat Gluster Storage cluster using the command line interface. For information on creating and configuring volumes, see Section 4.1, "Configuring Volumes Using the Command Line Interface" and Red Hat Gluster Storage Volumes in the Red Hat Gluster Storage Administration Guide : https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-red_hat_storage_volumes . Add a storage domain using Red Hat Virtualization Manager: Select the Storage resource tab to list existing storage domains. Click New Domain to open the New Domain window. Figure 4.4. New Domain Window Enter a Name for the storage domain. Select a shared Data Center to associate with the storage domain. Set the Domain Function to Data and the Storage Type to GlusterFS . Select a host from the Host to Use drop-down menu. Check the Use managed gluster volume checkbox and select the appropriate volume from the Gluster dropdown menu. Note This dropdown menu is only populated with volumes whose nodes are managed by Red Hat Virtualization Manager. See Chapter 5, Managing Red Hat Gluster Storage Servers and Volumes using Red Hat Virtualization Manager for instructions on how to set up management of your Red Hat Gluster Storage nodes by Red Hat Virtualization Manager. Enter the applicable Red Hat Gluster Storage native client Mount Options . Enter multiple mount options separated by commas. 
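For example, one commonly used native client option is backup-volfile-servers , which lists additional servers to contact if the first server is unavailable; an entry such as backup-volfile-servers=server2:server3 is a hypothetical illustration in which the server names are placeholders for your own Red Hat Gluster Storage hosts.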
For more information on native client mount options, see Creating Access to Volumes in the Red Hat Gluster Storage Administration Guide : https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-accessing_data_-_setting_up_clients . Note that only the native client is supported when integrating Red Hat Gluster Storage and Red Hat Virtualization. Click OK . You can now create virtual machines using Red Hat Gluster Storage as a storage domain. For more information on creating virtual machines, see the Red Hat Virtualization Virtual Machine Management Guide : https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/virtual_machine_management_guide/ . Note To prevent the risk of split brain incidents on Red Hat Gluster Storage domains, the use of shareable disks on Red Hat Gluster Storage domains is disabled. Attempting to create a shareable disk brings up a warning in the administration portal which recommends the use of Quorum on the Red Hat Gluster Storage server to ensure data integrity. This policy is not enforced on Red Hat Gluster Storage domains created on a POSIX domain with GlusterFS specified as the virtual file system type. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/configuring_virtual_machines_on_red_hat_storage_volumes_using_the_red_hat_enterprise_virtualization_manager |
Chapter 9. Monitoring project and application metrics using the Developer perspective | Chapter 9. Monitoring project and application metrics using the Developer perspective The Observe view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 9.1. Prerequisites You have created and deployed applications on OpenShift Dedicated . You have logged in to the web console and have switched to the Developer perspective. 9.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure Go to Observe to see the Dashboard , Metrics , Alerts , and Events for your project. Optional: Use the Dashboard tab to see graphs depicting the following application metrics: CPU usage Memory usage Bandwidth consumption Network-related information such as the rate of transmitted and received packets and the rate of dropped packets. In the Dashboard tab, you can access the Kubernetes compute resources dashboards. Note In the Dashboard list, the Kubernetes / Compute Resources / Namespace (Pods) dashboard is selected by default. Use the following options to see further details: Select a dashboard from the Dashboard list to see the filtered metrics. All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Select an option from the Time Range list to determine the time frame for the data being captured. Set a custom time range by selecting Custom time range from the Time Range list. You can input or select the From and To dates and times. Click Save to save the custom time range. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click Inspect located in the upper-right corner of every graph to see any particular graph details. The graph details appear in the Metrics tab. Optional: Use the Metrics tab to query for the required project metric. Figure 9.1. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optional: In the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Optional: Use the Alerts tab to do the following tasks: See the rules that trigger alerts for the applications in your project. Identify the alerts firing in the project. Silence such alerts if required. Figure 9.2. Monitoring alerts Use the following options to see further details: Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. In the Alerts Details page, you can click View Metrics to see the metrics for the alert. 
Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Optional: Use the Events tab to see the events for your project. Figure 9.3. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 9.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Observe tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 9.4. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 9.4. Image vulnerabilities breakdown In the Developer perspective, the project dashboard shows the Image Vulnerabilities link in the Status section. Using this link, you can view the Image Vulnerabilities breakdown window, which includes details regarding vulnerable container images and fixable container images. The icon color indicates severity: Red: High priority. Fix immediately. Orange: Medium priority. Can be fixed after high-priority vulnerabilities. Yellow: Low priority. Can be fixed after high and medium-priority vulnerabilities. Based on the severity level, you can prioritize vulnerabilities and fix them in an organized manner. Figure 9.5. Viewing image vulnerabilities 9.5. Monitoring your application and image vulnerabilities metrics After you create applications in your project and deploy them, use the Developer perspective in the web console to see the metrics for your application dependency vulnerabilities across your cluster. The metrics help you to analyze the following image vulnerabilities in detail: Total count of vulnerable images in a selected project Severity-based counts of all vulnerable images in a selected project Drilldown into severity to obtain the details, such as count of vulnerabilities, count of fixable vulnerabilities, and number of affected pods for each vulnerable image Prerequisites You have installed the Red Hat Quay Container Security operator from the Operator Hub. Note The Red Hat Quay Container Security operator detects vulnerabilities by scanning the images that are in the quay registry. 
Procedure For a general overview of the image vulnerabilities, on the navigation panel of the Developer perspective, click Project to see the project dashboard. Click Image Vulnerabilities in the Status section. The window that opens displays details such as Vulnerable Container Images and Fixable Container Images . For a detailed vulnerabilities overview, click the Vulnerabilities tab on the project dashboard. To get more detail about an image, click its name. View the default graph with all types of vulnerabilities in the Details tab. Optional: Click the toggle button to view a specific type of vulnerability. For example, click App dependency to see vulnerabilities specific to application dependency. Optional: You can filter the list of vulnerabilities based on their Severity and Type or sort them by Severity , Package , Type , Source , Current Version , and Fixed in Version . Click a Vulnerability to get its associated details: Base image vulnerabilities display information from a Red Hat Security Advisory (RHSA). App dependency vulnerabilities display information from the Snyk security application. 9.6. Additional resources Monitoring overview | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/odc-monitoring-project-and-application-metrics-using-developer-perspective |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_cross-site_replication/rhdg-docs_datagrid |
17.3. Network Address Translation | 17.3. Network Address Translation By default, virtual network switches operate in NAT mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected guests to use the host physical machine IP address for communication to any external network. By default, computers that are placed externally to the host physical machine cannot communicate to the guests inside when the virtual network switch is operating in NAT mode, as shown in the following diagram: Figure 17.3. Virtual network switch using NAT with two guests Warning Virtual network switches use NAT configured by iptables rules. Editing these rules while the switch is running is not recommended, as incorrect rules may result in the switch being unable to communicate. If the switch is not running, you can set the public IP range for forward mode NAT in order to create a port masquerading range by running: | [
"iptables -j SNAT --to-source [start]-[end]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-network_address_translation |
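The iptables line above is only a fragment; libvirt manages the actual NAT rules itself, so the safer pattern is to set the SNAT address range in the virtual network definition. A minimal sketch, assuming the default NAT network and placeholder public addresses:
# Stop the network, edit its XML definition, then start it again
virsh net-destroy default
virsh net-edit default
#   inside <forward mode='nat'>, add for example:
#   <nat>
#     <address start='203.0.113.10' end='203.0.113.20'/>
#   </nat>
virsh net-start default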
17.3. Linux RAID Subsystems | 17.3. Linux RAID Subsystems RAID in Linux is composed of the following subsystems: Linux Hardware RAID controller drivers Hardware RAID controllers have no specific RAID subsystem in Linux. Because they use special RAID chipsets, hardware RAID controllers come with their own drivers; these drivers allow the system to detect the RAID sets as regular disks. mdraid The mdraid subsystem was designed as a software RAID solution for Linux; it is also the preferred solution for software RAID under Linux. This subsystem uses its own metadata format, generally referred to as native mdraid metadata. mdraid also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 6 uses mdraid with external metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid sets are configured and controlled through the mdadm utility. dmraid Device-mapper RAID or dmraid refers to device-mapper kernel code that offers the mechanism to piece disks together into a RAID set. This same kernel code does not provide any RAID configuration mechanism. dmraid is configured entirely in user-space, making it easy to support various on-disk metadata formats. As such, dmraid is used on a wide variety of firmware RAID implementations. dmraid also supports Intel firmware RAID, although Red Hat Enterprise Linux 6 uses mdraid to access Intel firmware RAID sets. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/raid-subsys |
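For the mdraid subsystem, a minimal sketch of creating and inspecting a software RAID set with mdadm (the device names are examples only):
# Create a two-member RAID1 set
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Watch the build progress and confirm the array details
cat /proc/mdstat
mdadm --detail /dev/md0
# Record the array so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf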
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/introduction_to_red_hat_jboss_enterprise_application_platform/proc_providing-feedback-on-red-hat-documentation_assembly-intro-eap |
Chapter 4. Installing a cluster quickly on Azure | Chapter 4. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.14, you can install a cluster on Microsoft Azure that uses the default configuration options. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. 
However, you must have an active subscription to access this page. 4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. You have the application ID and password of a service principal. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Specify the following Azure parameter values for your subscription and service principal: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. azure service principal client id : Enter its application ID. azure service principal client secret : Enter its password. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . 
If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-azure-default |
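If you would rather review the answers before deploying than respond to the interactive prompts during create cluster, a common pattern is to generate the configuration first; the directory name below is a placeholder:
# Answer the same prompts once and write install-config.yaml
./openshift-install create install-config --dir my-azure-cluster
# Optionally edit my-azure-cluster/install-config.yaml, then deploy
./openshift-install create cluster --dir my-azure-cluster --log-level=info
# Remove the cluster later using the same assets directory
./openshift-install destroy cluster --dir my-azure-cluster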
Chapter 2. OpenStack Integration Test Suite Tests | Chapter 2. OpenStack Integration Test Suite Tests The OpenStack Integration Test Suite has many applications. It acts as a gate for commits to the OpenStack core projects, it can run stress tests to generate load on a cloud deployment, and it can perform CLI tests to check the response formatting of the command line. However, this guide is concerned with the scenario tests and API tests . These tests are run against your OpenStack cloud deployment. The following sections contain information about implementing each of these tests. 2.1. Scenario Tests Scenario tests simulate a typical end user action workflow to test the integration points between services. The testing framework sets up the required configuration, tests the integration between services, and then removes the configuration automatically. Tag the tests with the services that they relate to, to make it clear which client libraries the test uses. A scenario is based on a use case, for example: Upload an image to the Image Service Deploy an instance from the image Attach a volume to the instance Create a snapshot of the instance Detach the volume from the instance 2.2. API Tests API tests validate the OpenStack API. Tests use the OpenStack Integration Test Suite implementation of the OpenStack API. You can use both valid and invalid JSON to ensure that error responses are valid. You can run tests independently, without relying on state left over from other tests. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/openstack_integration_test_suite_guide/chap-tempest-tests
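As a rough sketch of how the scenario and API tests described above are run with the upstream tempest command line (the workspace path and regular expression are placeholders, and the exact packaging of the Integration Test Suite on Red Hat OpenStack Platform may differ):
# Create a tempest workspace
tempest init ~/tempest-workspace
cd ~/tempest-workspace
# Run only the scenario tests
tempest run --regex 'tempest\.scenario'
# Or run the smoke-tagged subset of API and scenario tests
tempest run --smoke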
Chapter 12. Managing Hosts | Chapter 12. Managing Hosts Both DNS and Kerberos are configured as part of the initial client configuration. This is required because these are the two services that bring the machine within the IdM domain and allow it to identify the IdM server it will connect with. After the initial configuration, IdM has tools to manage both of these services in response to changes in the domain services, changes to the IT environment, or changes on the machines themselves which affect Kerberos, certificate, and DNS services. This chapter describes how to manage identity services that relate directly to the client machine: DNS entries and settings Machine authentication Host name changes (which affect domain services) 12.1. About Hosts, Services, and Machine Identity and Authentication The basic function of an enrollment process is to create a host entry for the client machine in the IdM directory. This host entry is used to establish relationships between other hosts and even services within the domain (as described in Chapter 1, Introduction to Red Hat Identity Management ). These relationships are part of delegating authorization and control to hosts within the domain. A host entry contains all of the information about the client within IdM: Service entries associated with the host The host and service principal Access control rules Machine information, such as its physical location and operating system Some services that run on a host can also belong to the IdM domain. Any service that can store a Kerberos principal or an SSL certificate (or both) can be configured as an IdM service. Adding a service to the IdM domain allows the service to request an SSL certificate or keytab from the domain. (Only the public key for the certificate is stored in the service record. The private key is local to the service.) An IdM domain establishes a commonality between machines, with common identity information, common policies, and shared services. Any machine which belongs to a domain functions as a client of the domain, which means it uses the services that the domain provides. An IdM domain provides three main services specifically for machines: DNS Kerberos Certificate management Like users, machines are an identity that is managed by IdM. Client machines use DNS to identify IdM servers, services, and domain members. These are, like user identities, stored in the 389 Directory Server instance for the IdM server. Like users, machines can be authenticated to the domain using Kerberos or certificates. From the machine perspective, there are several tasks that can be performed that access these domain services: Joining the DNS domain ( machine enrollment ) Managing DNS entries and zones Managing machine authentication Authentication in IdM includes machines as well as users. Machine authentication is required for the IdM server to trust the machine and to accept IdM connections from the client software installed on that machine. After authenticating the client, the IdM server can respond to its requests. IdM supports three different approaches to machine authentication: SSH keys. The SSH public key for the host is created and uploaded to the host entry. From there, the System Security Services Daemon (SSSD) uses IdM as an identity provider and can work in conjunction with OpenSSH and other services to reference the public keys located centrally in Identity Management. This is described in Section 12.5, "Managing Public SSH Keys for Hosts" . 
Key tables (or keytabs , a symmetric key resembling to some extent a user password) and machine certificates. Kerberos tickets are generated as part of the Kerberos services and policies defined by the server. Initially granting a Kerberos ticket, renewing the Kerberos credentials, and even destroying the Kerberos session are all handled by the IdM services. Managing Kerberos is covered in Chapter 29, Managing the Kerberos Domain . Machine certificates. In this case, the machine uses an SSL certificate that is issued by the IdM server's certificate authority and then stored in IdM's Directory Server. The certificate is then sent to the machine to present when it authenticates to the server. On the client, certificates are managed by a service called certmonger . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/hosts |
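A minimal command-line sketch of creating and inspecting a host entry with the ipa tooling (the host name, IP address, and service are placeholders):
# Create the host entry in the IdM directory
ipa host-add client.example.com --ip-address=192.0.2.10
# Review the entry, including its principal and keytab status
ipa host-show client.example.com
# Register a service that will run on the host
ipa service-add HTTP/client.example.com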
Chapter 6. Cluster Operators reference | Chapter 6. Cluster Operators reference This reference guide indexes the cluster Operators shipped by Red Hat that serve as the architectural foundation for OpenShift Container Platform. Cluster Operators are installed by default, unless otherwise noted, and are managed by the Cluster Version Operator (CVO). For more details on the control plane architecture, see Operators in OpenShift Container Platform . Cluster administrators can view cluster Operators in the OpenShift Container Platform web console from the Administration Cluster Settings page. Note Cluster Operators are not managed by Operator Lifecycle Manager (OLM) and OperatorHub. OLM and OperatorHub are part of the Operator Framework used in OpenShift Container Platform for installing and running optional add-on Operators . 6.1. Cloud Credential Operator Purpose The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Project openshift-cloud-credential-operator CRDs credentialsrequests.cloudcredential.openshift.io Scope: Namespaced CR: CredentialsRequest Validation: Yes Configuration objects No configuration required. Additional resources CredentialsRequest custom resource About the Cloud Credential Operator 6.2. Cluster Authentication Operator Purpose The Cluster Authentication Operator installs and maintains the Authentication custom resource in a cluster and can be viewed with: USD oc get clusteroperator authentication -o yaml Project cluster-authentication-operator 6.3. Cluster Autoscaler Operator Purpose The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider. Project cluster-autoscaler-operator CRDs ClusterAutoscaler : This is a singleton resource, which controls the configuration autoscaler instance for the cluster. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, the value of the WATCH_NAMESPACE environment variable. MachineAutoscaler : This resource targets a node group and manages the annotations to enable and configure autoscaling for that group, the min and max size. Currently only MachineSet objects can be targeted. 6.4. Cluster Cloud Controller Manager Operator Purpose Note This Operator is only fully supported for Azure Stack Hub. It is available as a Technology Preview for Amazon Web Services (AWS), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP). The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. It is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Cloud configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-cloud-controller-manager-operator 6.5. 
Cluster Config Operator Purpose The Cluster Config Operator performs the following tasks related to config.openshift.io : Creates CRDs. Renders the initial custom resources. Handles migrations. Project cluster-config-operator 6.6. Cluster CSI Snapshot Controller Operator Purpose The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Project cluster-csi-snapshot-controller-operator 6.7. Cluster Image Registry Operator Purpose The Cluster Image Registry Operator manages a singleton instance of the OpenShift Container Platform registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. Project cluster-image-registry-operator 6.8. Cluster Machine Approver Operator Purpose The Cluster Machine Approver Operator automatically approves the CSRs requested for a new worker node after cluster installation. Note For the control plane node, the approve-csr service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase. Project cluster-machine-approver-operator 6.9. Cluster Monitoring Operator Purpose The Cluster Monitoring Operator manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform. Project openshift-monitoring CRDs alertmanagers.monitoring.coreos.com Scope: Namespaced CR: alertmanager Validation: Yes prometheuses.monitoring.coreos.com Scope: Namespaced CR: prometheus Validation: Yes prometheusrules.monitoring.coreos.com Scope: Namespaced CR: prometheusrule Validation: Yes servicemonitors.monitoring.coreos.com Scope: Namespaced CR: servicemonitor Validation: Yes Configuration objects USD oc -n openshift-monitoring edit cm cluster-monitoring-config 6.10. Cluster Network Operator Purpose The Cluster Network Operator installs and upgrades the networking components on an OpenShift Container Platform cluster. 6.11. Cluster Samples Operator Purpose The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. 
On start up, the install pull secret is used by the image stream import logic in the internal registry and API server to authenticate with registry.redhat.io . An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample image streams. If created, those secrets contain the content of a config.json for docker needed to facilitate image import. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches the compatible release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, as the version annotation will change, those modifications can get replaced as the sample will be updated with the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the image streams directly. The samples resource includes a finalizer, which cleans up the following upon its deletion: Operator-managed image streams Operator-managed templates Operator-generated configuration resources Cluster status resources Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. Project cluster-samples-operator 6.12. Cluster Storage Operator Purpose The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storage class exists for OpenShift Container Platform clusters. Project cluster-storage-operator Configuration No configuration is required. Notes The Cluster Storage Operator supports Amazon Web Services (AWS) and Red Hat OpenStack Platform (RHOSP). The created storage class can be made non-default by editing its annotation, but the storage class cannot be deleted as long as the Operator runs. 6.13. Cluster Version Operator Purpose Cluster Operators manage specific areas of cluster functionality. The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. The CVO also checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. Project cluster-version-operator Additional resources Operators in OpenShift Container Platform 6.14. Console Operator Purpose The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. Project console-operator 6.15. DNS Operator Purpose The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. The Operator creates a working default deployment based on the cluster's configuration. The default cluster domain is cluster.local . Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported. The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster. Project cluster-dns-operator 6.16. 
etcd cluster Operator Purpose The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures. Project cluster-etcd-operator CRDs etcds.operator.openshift.io Scope: Cluster CR: etcd Validation: Yes Configuration objects USD oc edit etcd cluster 6.17. Ingress Operator Purpose The Ingress Operator configures and manages the OpenShift Container Platform router. Project openshift-ingress-operator CRDs clusteringresses.ingress.openshift.io Scope: Namespaced CR: clusteringresses Validation: No Configuration objects Cluster config Type Name: clusteringresses.ingress.openshift.io Instance Name: default View Command: USD oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml Notes The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router: USD oc get deployment -n openshift-ingress The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed ingress controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr , then the ingress controller operates in IPv6-only mode. In the following example, ingress controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr : USD oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}' Example output map[cidr:10.128.0.0/14 hostPrefix:23] 6.18. Insights Operator Purpose The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Project insights-operator Configuration No configuration is required. Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources About remote health monitoring for details about Insights Operator and Telemetry 6.19. Kubernetes API Server Operator Purpose The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed using the Cluster Version Operator (CVO). Project openshift-kube-apiserver-operator CRDs kubeapiservers.operator.openshift.io Scope: Cluster CR: kubeapiserver Validation: Yes Configuration objects USD oc edit kubeapiserver 6.20. Kubernetes Controller Manager Operator Purpose The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift Container Platform library-go framework and it is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-controller-manager-operator 6.21. Kubernetes Scheduler Operator Purpose The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. 
The Operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO). The Kubernetes Scheduler Operator contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-scheduler-operator Configuration The configuration for the Kubernetes Scheduler is the result of merging: a default configuration. an observed configuration from the spec schedulers.config.openshift.io . All of these are sparse configurations, invalidated JSON snippets which are merged to form a valid configuration at the end. 6.22. Kubernetes Storage Version Migrator Operator Purpose The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests. Project cluster-kube-storage-version-migrator-operator 6.23. Machine API Operator Purpose The Machine API Operator manages the lifecycle of specific purpose custom resource definitions (CRD), controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster. Project machine-api-operator CRDs MachineSet Machine MachineHealthCheck 6.24. Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources About the OpenShift SDN network plugin . Project openshift-machine-config-operator 6.25. Marketplace Operator Purpose The Marketplace Operator is a conduit to bring off-cluster Operators to your cluster. Project operator-marketplace 6.26. Node Tuning Operator Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon. The majority of high-performance applications require some level of kernel tuning. 
The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Project cluster-node-tuning-operator 6.27. OpenShift API Server Operator Purpose The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster. Project openshift-apiserver-operator CRDs openshiftapiservers.operator.openshift.io Scope: Cluster CR: openshiftapiserver Validation: Yes 6.28. OpenShift Controller Manager Operator Purpose The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager custom resource in a cluster and can be viewed with: USD oc get clusteroperator openshift-controller-manager -o yaml The custom resource definition (CRD) openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with: USD oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml Project cluster-openshift-controller-manager-operator 6.29. Operator Lifecycle Manager Operators Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 6.1. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.9, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. CRDs Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 6.1. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. 
OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 6.2. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. Additional resources Understanding Operator Lifecycle Manager (OLM) 6.30. OpenShift Service CA Operator Purpose The OpenShift Service CA Operator mints and manages serving certificates for Kubernetes services. Project openshift-service-ca-operator 6.31. vSphere Problem Detector Operator Purpose The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. 
Note The vSphere Problem Detector Operator is only started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. Configuration No configuration is required. Notes The Operator supports OpenShift Container Platform installations on vSphere. The Operator uses the vsphere-cloud-credentials to communicate with vSphere. The Operator performs checks that are related to storage. Additional resources Using the vSphere Problem Detector Operator | [
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/operators/cluster-operators-ref |
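To see how these cluster Operators are doing on a running cluster, the usual starting point from the CLI is the clusteroperators listing; the Operator name in the second command is just an example:
# List every cluster Operator with its Available, Progressing, and Degraded status
oc get clusteroperators
# Drill into one Operator's conditions and related objects
oc describe clusteroperator ingress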
Developing automation content | Developing automation content Red Hat Ansible Automation Platform 2.5 Develop Ansible automation content to run automation jobs Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/developing_automation_content/index |
1.3. Storage Array Support | 1.3. Storage Array Support By default, DM Multipath includes support for the most common storage arrays that themselves support DM Multipath. For information on the default configuration values, including supported devices, run either of the following commands. If your storage array supports DM Multipath and is not configured by default, you may need to add it to the DM Multipath configuration file, multipath.conf . For information on the DM Multipath configuration file, see Chapter 4, The DM Multipath Configuration File . Some storage arrays require special handling of I/O errors and path switching. These require separate hardware handler kernel modules. | [
"multipathd show config multipath -t"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/storage_support |
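When an array is not covered by the built-in defaults, it can be described in the devices section of multipath.conf. The stanza below is a sketch only - the vendor and product strings and the attribute values must come from the array vendor's documentation:
# Example addition to /etc/multipath.conf (values are placeholders):
#   devices {
#       device {
#           vendor "EXAMPLEVENDOR"
#           product "EXAMPLEARRAY"
#           path_grouping_policy group_by_prio
#           no_path_retry 12
#       }
#   }
# Apply the change and confirm the device stanza is picked up
systemctl reload multipathd.service
multipathd show config | grep -A 6 EXAMPLEVENDOR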
Chapter 6. Creating a boot ISO installer image with RHEL image builder | Chapter 6. Creating a boot ISO installer image with RHEL image builder You can use RHEL image builder to create bootable ISO Installer images. These images consist of a .tar file that has a root file system. You can use the bootable ISO image to install the file system to a bare metal server. RHEL image builder builds a manifest that creates a boot ISO that contains a root file system. To create the ISO image, select the image type image-installer . RHEL image builder builds a .tar file with the following content: a standard Anaconda installer ISO an embedded RHEL system tar file a default Kickstart file that installs the commit with minimal default requirements The created installer ISO image includes a pre-configured system image that you can install directly to a bare metal server. 6.1. Creating a boot ISO installer image using the RHEL image builder CLI You can create a customized boot ISO installer image by using the RHEL image builder command-line interface. As a result, image builder builds an .iso file that contains a .tar file, which you can install for the Operating system. The .iso file is set up to boot Anaconda and install the .tar file to set up the system. You can use the created ISO image file on a hard disk or to boot in a virtual machine, for example, in an HTTP Boot or a USB installation. Warning The Installer ( .iso ) image type does not accept partitions customization. If you try to manually configure the filesystem customization, it is not applied to any system built by the Installer image. Mounting an ISO image built with RHEL image builder file system customizations causes an error in the Kickstart, and the installation does not reboot automatically. For more information, see the Red Hat Knowledgebase solution Automate a RHEL ISO installation generated by image builder . Prerequisites You have created a blueprint for the image and customized it with a user included and pushed it back into RHEL image builder. See Blueprint customizations . Procedure Create the ISO image: BLUEPRINT-NAME with name of the blueprint you created image-installer is the image type The compose process starts in the background and the UUID of the compose is shown. Wait until the compose is finished. This might take several minutes. Check the status of the compose: A finished compose shows a status value of FINISHED . Identify the compose in the list by its UUID. After the compose is finished, download the created image file to the current directory: Replace UUID with the UUID value obtained in the steps. RHEL image builder builds a .iso file that contains a .tar file. The .tar file is the image that will be installed for the Operating system. The . iso is set up to boot Anaconda and install the .tar file to set up the system. steps In the directory where you downloaded the image file. Locate the .iso image you downloaded. Mount the ISO. You can find the .tar file at the /mnt/liveimg.tar.gz directory. List the .tar file content: Additional resources Creating system images with RHEL image builder command-line interface Creating a bootable installation medium for RHEL 6.2. Creating a boot ISO installer image by using RHEL image builder in the GUI You can build a customized boot ISO installer image by using the RHEL image builder GUI. You can use the resulting ISO image file on a hard disk or boot it in a virtual machine. For example, in an HTTP Boot or a USB installation. 
Warning The Installer ( .iso ) image type does not accept partitions customization. If you try to manually configure the filesystem customization, it is not applied to any system built by the Installer image. Mounting an ISO image built with RHEL image builder file system customizations causes an error in the Kickstart, and the installation does not reboot automatically. For more information, see the Red Hat Knowledgebase solution Automate a RHEL ISO installation generated by image builder . Prerequisites You have opened the RHEL image builder app from the web console in a browser. You have created a blueprint for your image. See Creating a RHEL image builder blueprint in the web console interface . Procedure On the RHEL image builder dashboard, locate the blueprint that you want to use to build your image. Optionally, enter the blueprint name or a part of it into the search box at upper left, and click Enter . On the right side of the blueprint, click the corresponding Create Image button. The Create image dialog wizard opens. On the Create image dialog wizard: In the Image Type list, select "RHEL Installer (.iso)" . Click . On the Review tab, click Create . RHEL image builder adds the compose of a RHEL ISO image to the queue. After the process is complete, you can see the image build complete status. RHEL image builder creates the ISO image. Verification After the image is successfully created, you can download it. Click Download to save the "RHEL Installer (.iso)" image to your system. Navigate to the folder where you downloaded the "RHEL Installer (.iso)" image. Locate the .tar image you downloaded. Extract the "RHEL Installer (.iso)" image content. Additional resources Creating a RHEL image builder blueprint in the web console interface Creating system images with RHEL image builder command-line interface Creating a bootable installation medium for RHEL 6.3. Installing a bootable ISO to a media and booting it Install the bootable ISO image you created by using RHEL image builder to a bare metal system. Prerequisites You created the bootable ISO image by using RHEL image builder. See Creating a boot ISO installer image using the RHEL image builder on the command line . You have downloaded the bootable ISO image. You installed the dd tool. You have a USB flash drive with enough capacity for the ISO image. The required size varies depending on the packages you selected in your blueprint, but the recommended minimum size is 8 GB. Procedure Write the bootable ISO image directly to the USB drive using the dd tool. For example: Where installer.iso is the ISO image file name and /dev/sdX is your USB flash drive device path. Insert the flash drive into a USB port of the computer you want to boot. Boot the ISO image from the USB flash drive. When the installation environment starts, you might need to complete the installation manually, similarly to the default Red Hat Enterprise Linux installation. Additional resources Booting the installation media Customizing your installation Creating a bootable USB device on Linux | [
"composer-cli compose start BLUEPRINT-NAME image-installer",
"composer-cli compose status",
"composer-cli compose list",
"composer-cli compose image UUID",
"mount -o ro path_to_ISO /mnt",
"tar ztvf /mnt/liveimg.tar.gz",
"tar -xf content .tar",
"dd if=installer.iso of=/dev/sdX"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_a_customized_rhel_system_image/creating-a-boot-iso-installer-image-with-image-builder_composing-a-customized-rhel-system-image |
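The procedure in Section 6.1 above lends itself to scripting. The following is a minimal sketch, not taken from the guide itself: the blueprint name and mount point are placeholders, and the UUID parsing assumes the usual composer-cli output format, so verify it against your own version before relying on it.

```bash
#!/usr/bin/env bash
# Minimal sketch of the boot ISO workflow described above, assuming the blueprint
# "my-install-blueprint" has already been pushed to RHEL image builder.
set -euo pipefail

BLUEPRINT=my-install-blueprint

# Start the compose; composer-cli prints the compose UUID, which is captured here.
# The exact output wording can vary between versions, so check the parsing locally.
UUID=$(composer-cli compose start "$BLUEPRINT" image-installer | awk '{print $2}')

# Wait until the compose reports FINISHED.
until composer-cli compose status | grep "$UUID" | grep -q FINISHED; do
    sleep 60
done

# Download the resulting ISO into the current directory and locate it.
composer-cli compose image "$UUID"
ISO=$(ls "${UUID}"*.iso)

# Read-only mount and inspect the embedded root file system tar, as in the verification steps.
mkdir -p /mnt/bootiso
mount -o ro "$ISO" /mnt/bootiso
tar ztvf /mnt/bootiso/liveimg.tar.gz | head
```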
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/making-open-source-more-inclusive_datagrid |
5.4. Virtual Memory | 5.4. Virtual Memory 5.4.1. Hot Plugging Virtual Memory You can hot plug virtual memory. Hot plugging means enabling or disabling devices while a virtual machine is running. Each time memory is hot plugged, it appears as a new memory device in the Vm Devices tab in the details view of the virtual machine, up to a maximum of 16 available slots. When the virtual machine is restarted, these devices are cleared from the Vm Devices tab without reducing the virtual machine's memory, allowing you to hot plug more memory devices. If the hot plug fails (for example, if there are no more available slots), the memory increase will be applied when the virtual machine is restarted. Important This feature is currently not supported for the self-hosted engine Manager virtual machine. Hot Plugging Virtual Memory Click Compute Virtual Machines and select a running virtual machine. Click Edit . Click the System tab. Increase the Memory Size by entering the total amount required. Memory can be added in multiples of 256 MB. By default, the maximum memory allowed for the virtual machine is set to 4x the memory size specified. Though the value is changed in the user interface, the maximum value is not hot plugged, and you will see the pending changes icon ( ). To avoid that, you can change the maximum memory back to the original value. Click OK . This action opens the Pending Virtual Machine changes window, as some values such as maxMemorySizeMb and minAllocatedMem will not change until the virtual machine is restarted. However, the hot plug action is triggered by the change to the Memory Size value, which can be applied immediately. Click OK . The virtual machine's Defined Memory is updated in the General tab in the details view. You can see the newly added memory device in the Vm Devices tab in the details view. 5.4.2. Hot Unplugging Virtual Memory You can hot unplug virtual memory. Hot unplugging means disabling devices while a virtual machine is running. Important Only memory added with hot plugging can be hot unplugged. The virtual machine operating system must support memory hot unplugging. The virtual machines must not have a memory balloon device enabled. This feature is disabled by default. All blocks of the hot-plugged memory must be set to online_movable in the virtual machine's device management rules. In virtual machines running up-to-date versions of Red Hat Enterprise Linux or CoreOS, this rule is set by default. For information on device management rules, consult the documentation for the virtual machine's operating system. If any of these conditions are not met, the memory hot unplug action may fail or cause unexpected behavior. Hot Unplugging Virtual Memory Click Compute Virtual Machines and select a running virtual machine. Click the Vm Devices tab. In the Hot Unplug column, click Hot Unplug beside the memory device to be removed. Click OK in the Memory Hot Unplug window. The Physical Memory Guaranteed value for the virtual machine is decremented automatically if necessary. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Virtual_Memory |
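For example, with a Memory Size of 4096 MB, the default maximum memory is 4 x 4096 = 16384 MB. The hot-unplug prerequisite that all hot-plugged blocks are set to online_movable can be checked from inside the guest. The following sketch assumes a Linux guest exposing the standard memory-hotplug sysfs interface; the udev rule shown is an illustrative assumption, not text from this guide.

```bash
# Inside the guest: show the state of every memory block.
# Blocks that can be hot unplugged should report "online_movable".
grep -H . /sys/devices/system/memory/memory*/state

# Illustrative udev rule (for example /etc/udev/rules.d/90-hotplug-memory.rules)
# that onlines newly added memory blocks as movable:
#   SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online_movable"
```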
14.4. Retrieving ACLs | 14.4. Retrieving ACLs To determine the existing ACLs for a file or directory, use the getfacl command: It returns output similar to the following: If a directory is specified, and it has a default ACL, the default ACL is also displayed such as: | [
"getfacl <filename>",
"file: file owner: andrius group: andrius user::rw- user:smoore:r-- group::r-- mask::r-- other::r--",
"file: file owner: andrius group: andrius user::rw- user:smoore:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:andrius:rwx default:group::r-x default:mask::rwx default:other::r-x"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/access_control_lists-retrieving_acls |
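To connect the output shown above to day-to-day use, the following sketch assumes a hypothetical /share/project directory that carries a default ACL; the path and the smoore user are placeholders.

```bash
# Display the access ACL and, for a directory, the default ACL.
getfacl /share/project

# Entries prefixed with "default:" are inherited by new files created in the directory;
# create a file and read its ACL to confirm.
touch /share/project/report.txt
getfacl /share/project/report.txt

# getfacl can also walk a directory tree recursively.
getfacl -R /share/project
```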
5.4. Renewing Certificates | 5.4. Renewing Certificates This section discusses how to renew certificates. For more information on how to set up certificate renewal, see Section 3.4, "Configuring Profiles to Enable Renewal" . Renewing a certificate consists in regenerating the certificate with the same properties to be used for the same purpose as the original certificate. In general, there are two types of renewals: Same key Renewal takes the original key, profile, and request of the certificate and recreates a new certificate with a new validity period and expiration date using the identical key. This can be done by either of the following methods: resubmitting the original certificate request (CSR) through the original profile, or regenerating a CSR with the original keys by using supporting tools such as certutil Re-keying a certificate requires regeneration of a certificate request with the same information, so that a new key pair is generated. The CSR is then submitted through the original profile. 5.4.1. Same Keys Renewal 5.4.1.1. Reusing CSR There are three approval methods for same key renewal at the end entity portal. Agent-approved method requires submitting the serial number of the certificate to be renewed; This method would require a CA agent's approval. Directory-based renewal requires submitting the serial number of the certificate to be renewed, and the CA draws the information from its current certificate directory entry. The certificate is automatically approved if the ldap uid/pwd is authenticated successfully. Certificate-based renewal uses the certificate in the browser database to authenticate and have the same certificate re-issued. 5.4.1.1.1. Agent-Approved or Directory-Based Renewals Sometimes, a certificate renewal request has to be manually approved, either by a CA agent or by providing login information for the user directory. Open the end-entities services page for the CA which issued the certificate (or its clone). Click the name of the renewal form to use. Enter the serial number of the certificate to renew. This can be in decimal or hexadecimal form. Click the renew button. The request is submitted. For directory-based renewals, the renewed certificate is automatically returned. Otherwise, the renewal request will be approved by an agent. 5.4.1.1.2. Certificate-Based Renewal Some user certificates are stored directly in your browser, so some renewal forms will simply check your browser certificate database for a certificate to renew. If a certificate can be renewed, then the CA automatically approved and reissued it. Important If the certificate which is being renewed has already expired, then it probably cannot be used for certificate-based renewal. The browser client may disallow any SSL client authentication with an expired certificate. In that case, the certificate must be renewed using one of the other renewal methods. Open the end-entities services page for the CA which issued the certificate (or its clone). Click the name of the renewal form to use. There is no input field, so click the Renew button. When prompted, select the certificate to renew. The request is submitted and the renewed certificate is automatically returned. 5.4.1.2. Renewal by generating CSR with same keys Sometimes, the original CSR might not be available. The certutil tool allows one to regenerate a CSR with the same keys, provided that the key pair is in the NSS database. 
This can be achieved by doing the following: Find the corresponding key id in the NSS db: Generate a CSR using a specific key: Alternatively, instead of keyid , if a key is associated with a certificate in the NSS db, nickname could be used: Generate a CSR using an existing nickname: 5.4.2. Renewal by Re-keying Certificates Since renewal by re-keying is basically generating a new CSR with the same info as the old certificate, just follow any one of the methods described in Section 5.2, "Creating Certificate Signing Requests" . Be mindful to enter the same information as the old certificate. | [
"http s ://server.example.com: 8443/ca/ee/ca",
"http s ://server.example.com: 8443/ca/ee/ca",
"Certutil -d <nssdb dir> -K",
"Certutil -d <nssdb dir> -R -k <key id> -s <subject DN> -o <CSR output file>",
"Certutil -d <nssdb dir> -R -k <nickname> -s <subject DN> -o <CSR output file>"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/renewing-certificates |
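Putting the same-key renewal commands above together, a typical sequence looks like the following sketch. The NSS database directory, subject DN, nickname, and output file name are placeholders; substitute the key ID reported by the first command.

```bash
# 1. List the key pairs in the NSS database and note the ID of the key to reuse.
certutil -d /etc/pki/nssdb -K

# 2. Regenerate a CSR with that key (same-key renewal).
certutil -d /etc/pki/nssdb -R -k <key id> \
    -s "CN=server.example.com,O=Example Corp" -o renewal.csr

# 3. Alternatively, reference the key through the certificate nickname in the database.
certutil -d /etc/pki/nssdb -R -k "Server-Cert" \
    -s "CN=server.example.com,O=Example Corp" -o renewal.csr
```

The resulting CSR is then submitted through the original profile, as described above.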
probe::scheduler.process_fork | probe::scheduler.process_fork Name probe::scheduler.process_fork - Process forked Synopsis scheduler.process_fork Values name name of the probe point parent_pid PID of the parent process child_pid PID of the child process | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-process-fork |
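A minimal usage sketch for this probe point, assuming SystemTap and the matching kernel debuginfo are installed and the command is run as root:

```bash
# Print every fork observed by the scheduler tapset, using the values listed above.
stap -e 'probe scheduler.process_fork {
    printf("%s: parent=%d child=%d\n", name, parent_pid, child_pid)
}'
```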
Chapter 4. Quay.io user interface overview | Chapter 4. Quay.io user interface overview The user interface (UI) of Quay.io is a fundamental component that serves as the user's gateway to managing and interacting with container images within the platform's ecosystem. Quay.io's UI is designed to provide an intuitive and user-friendly interface, making it easy for users of all skill levels to navigate and harness Quay.io's features and functionalities. This documentation section aims to introduce users to the key elements and functionalities of Quay.io's UI. It will cover essential aspects such as the UI's layout, navigation, and key features, providing a solid foundation for users to explore and make the most of Quay.io's container registry service. Throughout this documentation, step-by-step instructions, visual aids, and practical examples are provided on the following topics: Exploring applications and repositories Using the Quay.io tutorial Pricing and Quay.io plans Signing in and using Quay.io features Collectively, this document ensures that users can quickly grasp the UI's nuances and successfully navigate their containerization journey with Quay.io. 4.1. Quay.io landing page The Quay.io landing page serves as the central hub for users to access the container registry services offered. This page provides essential information and links to guide users in securely storing, building, and deploying container images effortlessly. The landing page of Quay.io includes links to the following resources: Explore . On this page, you can search the Quay.io database for various applications and repositories. Tutorial . On this page, you can take a step-by-step walkthrough that shows you how to use Quay.io. Pricing . On this page, you can learn about the various pricing tiers offered for Quay.io. There are also various FAQs addressed on this page. Sign in . By clicking this link, you are re-directed to sign into your Quay.io repository. . The landing page also includes information about scheduled maintenance. During scheduled maintenance, Quay.io is operational in read-only mode, and pulls function as normal. Pushes and builds are non-operational during scheduled maintenance. You can subscribe to updates regarding Quay.io maintenance by navigating to Quay.io Status page and clicking Subscribe To Updates . The landing page also includes links to the following resources: Documentation . This page provides documentation for using Quay.io. Terms . This page provides legal information about Red Hat Online Services. Privacy . This page provides information about Red Hat's Privacy Statement. Security . this page provides information about Quay.io security, including SSL/TLS, encryption, passwords, access controls, firewalls, and data resilience. About . This page includes information about packages and projects used and a brief history of the product. Contact . This page includes information about support and contacting the Red Hat Support Team. All Systems Operational . This page includes information the status of Quay.io and a brief history of maintenance. Cookies. By clicking this link, a popup box appears that allows you to set your cookie preferences. . You can also find information about Trying Red Hat Quay on premise or Trying Red Hat Quay on the cloud , which redirects you to the Pricing page. Each option offers a free trial. 4.1.1. Creating a Quay.io account New users of Quay.io are required to both Register for a Red Hat account and create a Quay.io username. 
These accounts are correlated, with two distinct differences: The Quay.io account can be used to push and pull container images or Open Container Initiative images to Quay.io to store images. The Red Hat account provides users access to the Quay.io user interface. For paying customers, this account can also be used to access images from the Red Hat Ecosystem Catalog , which can be pushed to their Quay.io repository. Users must first register for a Red Hat account, and then create a Quay.io account. Users need both accounts to properly use all features of Quay.io. 4.1.1.1. Registering for a Red Hat Account Use the following procedure to register for a Red Hat account for Quay.io. Procedure Navigate to the Red Hat Customer Portal . In the navigation pane, click Log In . When you reach the log in page, click Register for a Red Hat Account . Enter a Red Hat login ID. Enter a password. Enter the following personal information: First name Last name Email address Phone number Enter the following contact information that is relevant to your country or region. For example: Country/region Address Postal code City County Select and agree to Red Hat's terms and conditions. Click Create my account . Navigate to Quay.io and log in. 4.1.1.2. Creating a Quay.io user account Use the following procedure to create a Quay.io user account. Prerequisites You have created a Red Hat account. Procedure If required, resolve the captcha by clicking I am not a robot and confirming. You are redirected to a Confirm Username page. On the Confirm Username page, enter a username. By default, a username is generated. If the same username already exists, a number is added at the end to make it unique. This username is used as a namespace in the Quay Container Registry. After deciding on a username, click Confirm Username . You are redirected to the Quay.io Repositories page, which serves as a dedicated hub where users can access and manage their repositories with ease. From this page, users can efficiently organize, navigate, and interact with their container images and related resources. 4.1.1.3. Quay.io Single Sign On support Red Hat Single Sign On (SSO) can be used with Quay.io. Use the following procedure to set up Red Hat SSO with Quay.io. For most users, these accounts are already linked. However, for some legacy Quay.io users, this procedure might be required. Prerequisites You have created a Quay.io account. Procedure Navigate to the Quay.io Recovery page . Enter your username and password, then click Sign in to Quay Container Registry . In the navigation pane, click your username Account Settings . In the navigation pane, click External Logins and Applications . Click Attach to Red Hat . If you are already signed into Red Hat SSO, your account is automatically linked. Otherwise, you are prompted to sign into Red Hat SSO by entering your Red Hat login or email, and the password. Alternatively, you might need to create a new account first. After signing into Red Hat SSO, you can choose to authenticate against Quay.io using your Red Hat account from the login page. Additional resources For more information, see Quay.io Now Supports Red Hat Single Sign On . 4.1.2. Exploring Quay.io The Quay.io Explore page is a valuable hub that allows users to delve into a vast collection of container images, applications, and repositories shared by the Quay.io community.
With its intuitive and user-friendly design, the Explore page offers a powerful search function, enabling users to effortlessly discover containerized applications and resources. 4.1.3. Trying Quay.io (deprecated) Note The Red Hat Quay tutorial is currently deprecated and will be removed when the v2 UI goes generally available (GA). The Quay.io Tutorial page offers users an introduction to the Quay.io container registry service. By clicking Continue Tutorial , users learn how to perform the following tasks on Quay.io: Logging into Quay Container Registry from the Docker CLI Starting a container Creating images from a container Pushing a repository to Quay Container Registry Viewing a repository Setting up build triggers Changing a repository's permissions 4.1.4. Information about Quay.io pricing In addition to a free tier, Quay.io also offers several paid plans that have enhanced benefits. The Quay.io Pricing page offers information about Quay.io plans and the associated prices of each plan. The cost of each tier can be found on the Pricing page. All Quay.io plans include the following benefits: Continuous integration Public repositories Robot accounts Teams SSL/TLS encryption Logging and auditing Invoice history Quay.io subscriptions are handled by the Stripe payment processing platform. A valid credit card is required to sign up for Quay.io. To sign up for Quay.io, use the following procedure. Procedure Navigate to the Quay.io Pricing page . Decide on a plan, for example, Small , and click Buy Now . You are redirected to the Create New Organization page. Enter the following information: Organization Name Organization Email Optional. You can select a different plan if you want a larger plan than, for example, Small . Resolve the captcha, and select Create Organization . You are redirected to Stripe. Enter the following information: Card information , including MM/YY and the CVC Name on card Country or region ZIP (if applicable) Check the box if you want your information to be saved. Phone Number Click Subscribe after all boxes have been filled. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/about_quay_io/quayio-ui-overview
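The tutorial topics listed above (logging in from a container CLI, pushing and viewing a repository) reduce to a few commands. This sketch uses podman, which accepts the same syntax as the Docker CLI mentioned in the tutorial; the quayadmin username and busybox repository are placeholders for your own account and image names.

```bash
# Log in to Quay.io with the username created earlier.
podman login quay.io

# Pull a small test image, retag it into your Quay.io namespace, and push it.
podman pull docker.io/library/busybox:latest
podman tag docker.io/library/busybox:latest quay.io/quayadmin/busybox:latest
podman push quay.io/quayadmin/busybox:latest

# Pull the image back to confirm the new repository is reachable.
podman pull quay.io/quayadmin/busybox:latest
```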
Chapter 1. Network Observability Operator release notes | Chapter 1. Network Observability Operator release notes The Network Observability Operator enables administrators to observe and analyze network traffic flows for OpenShift Container Platform clusters. These release notes track the development of the Network Observability Operator in the OpenShift Container Platform. For an overview of the Network Observability Operator, see About Network Observability Operator . 1.1. Network Observability Operator 1.8.0 The following advisory is available for the Network Observability Operator 1.8.0: Network Observability Operator 1.8.0 1.1.1. New features and enhancements 1.1.1.1. Packet translation You can now enrich network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request. For more information, see Endpoint translation (xlat) and Working with endpoint translation (xlat) . 1.1.1.2. OVN-Kubernetes networking events tracking Important OVN-Kubernetes networking events tracking is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can now use network event tracking in Network Observability to gain insight into OVN-Kubernetes events, including network policies, admin network policies, and egress firewalls. For more information, see Viewing network events . 1.1.1.3. eBPF performance improvements in 1.8 Network Observability now uses hash maps instead of per-CPU maps. This means that network flows data is now tracked in the kernel space and new packets are also aggregated there. The de-duplication of network flows can now occur in the kernel, so the size of data transfer between the kernel and the user spaces yields better performance. With these eBPF performance improvements, there is potential to observe a CPU resource reduction between 40% and 57% in the eBPF Agent. 1.1.1.4. Network Observability CLI The following new features, options, and filters are added to the Network Observability CLI for this release: Capture metrics with filters enabled by running the oc netobserv metrics command. Run the CLI in the background by using the --background option with flows and packets capture and running oc netobserv follow to see the progress of the background run and oc netobserv copy to download the generated logs. Enrich flows and metrics capture with Machines, Pods, and Services subnets by using the --get-subnets option. New filtering options available with packets, flows, and metrics capture: eBPF filters on IPs, Ports, Protocol, Action, TCP Flags and more Custom nodes using --node-selector Drops only using --drops Any field using --regexes For more information, see Network Observability CLI reference . 1.1.2. Bug fixes Previously, the Network Observability Operator came with a "kube-rbac-proxy" container to manage RBAC for its metrics server. Since this external component is deprecated, it was necessary to remove it. 
It is now replaced with direct TLS and RBAC management through Kubernetes controller-runtime, without the need for a side-car proxy. ( NETOBSERV-1999 ) Previously in the OpenShift Container Platform console plugin, filtering on a key that was not equal to multiple values would not filter anything. With this fix, the expected results are returned, which is all flows not having any of the filtered values. ( NETOBSERV-1990 ) Previously in the OpenShift Container Platform console plugin with disabled Loki, it was very likely to generate a "Can't build query" error due to selecting an incompatible set of filters and aggregations. Now this error is avoided by automatically disabling incompatible filters while still making the user aware of the filter incompatibility. ( NETOBSERV-1977 ) Previously, when viewing flow details from the console plugin, the ICMP info was always displayed in the side panel, showing "undefined" values for non-ICMP flows. With this fix, ICMP info is not displayed for non-ICMP flows. ( NETOBSERV-1969 ) Previously, the "Export data" link from the Traffic flows view did not work as intended, generating empty CSV reports. Now, the export feature is restored, generating non-empty CSV data. ( NETOBSERV-1958 ) Previously, it was possible to configure the FlowCollector with processor.logTypes Conversations , EndedConversations or All with loki.enable set to false , despite the conversation logs being only useful when Loki is enabled. This resulted in wasted resources. Now, this configuration is invalid and is rejected by the validation webhook. ( NETOBSERV-1957 ) Configuring the FlowCollector with processor.logTypes set to All consumes much more resources, such as CPU, memory and network bandwidth, than the other options. This was previously not documented. It is now documented, and triggers a warning from the validation webhook. ( NETOBSERV-1956 ) Previously, under high stress, some flows generated by the eBPF agent were mistakenly dismissed, resulting in traffic bandwidth under-estimation. Now, those generated flows are not dismissed. ( NETOBSERV-1954 ) Previously, when enabling the network policy in the FlowCollector configuration, the traffic to the Operator webhooks was blocked, breaking the FlowMetrics API validation. Now traffic to the webhooks is allowed. ( NETOBSERV-1934 ) Previously, when deploying the default network policy, the openshift-console and openshift-monitoring namespaces were set by default in the additionalNamespaces field, resulting in duplicated rules. Now there is no additional namespace set by default, which helps avoid duplicated rules. ( NETOBSERV-1933 ) Previously from the OpenShift Container Platform console plugin, filtering on TCP flags would match flows having only the exact desired flag. Now, any flow having at least the desired flag appears in filtered flows. ( NETOBSERV-1890 ) When the eBPF agent runs in privileged mode and pods are continuously added or deleted, a file descriptor (FD) leak occurs. The fix ensures proper closure of the FD when a network namespace is deleted. ( NETOBSERV-2063 ) Previously, the CLI agent DaemonSet did not deploy on master nodes. Now, a toleration is added to the agent DaemonSet so that it is scheduled on every node when taints are set, and CLI agent DaemonSet pods run on all nodes. ( NETOBSERV-2030 ) Previously, the Source Resource and Source Destination filter autocompletion was not working when using Prometheus storage only. Now this issue is fixed and suggestions display as expected.
( NETOBSERV-1885 ) Previously, a resource using multiple IPs was displayed separately in the Topology view. Now, the resource shows as a single topology node in the view. ( NETOBSERV-1818 ) Previously, the console refreshed the Network traffic table view contents when the mouse pointer hovered over the columns. Now, the the display is fixed, so row height remains constant with a mouse hover. ( NETOBSERV-2049 ) 1.1.3. Known issues If there is traffic that uses overlapping subnets in your cluster, there is a small risk that the eBPF Agent mixes up the flows from overlapped IPs. This can happen if different connections happen to have the exact same source and destination IPs and if ports and protocol are within a 5 seconds time frame and happening on the same node. This should not be possible unless you configured secondary networks or UDN. Even in that case, it is still very unlikely in usual traffic, as source ports are usually a good differentiator. ( NETOBSERV-2115 ) After selecting a type of exporter to configure in the FlowCollector resource spec.exporters section from the OpenShift Container Platform web console form view, the detailed configuration for that type does not show up in the form. The workaround is to configure directly the YAML. ( NETOBSERV-1981 ) 1.2. Network Observability Operator 1.7.0 The following advisory is available for the Network Observability Operator 1.7.0: Network Observability Operator 1.7.0 1.2.1. New features and enhancements 1.2.1.1. OpenTelemetry support You can now export enriched network flows to a compatible OpenTelemetry endpoint, such as the Red Hat build of OpenTelemetry. For more information see Export enriched network flow data . 1.2.1.2. Network Observability Developer perspective You can now use Network Observability in the Developer perspective. For more information, see OpenShift Container Platform console integration . 1.2.1.3. TCP flags filtering You can now use the tcpFlags filter to limit the volume of packets processed by the eBPF program. For more information, see Flow filter configuration parameters , eBPF flow rule filter , and Detecting SYN flooding using the FlowMetric API and TCP flags . 1.2.1.4. Network Observability for OpenShift Virtualization You can observe networking patterns on an OpenShift Virtualization setup by identifying eBPF-enriched network flows coming from VMs that are connected to secondary networks, such as through Open Virtual Network (OVN)-Kubernetes. For more information, see Configuring virtual machine (VM) secondary network interfaces for Network Observability . 1.2.1.5. Network policy deploys in the FlowCollector custom resource (CR) With this release, you can configure the FlowCollector CR to deploy a network policy for Network Observability. Previously, if you wanted a network policy, you had to manually create one. The option to manually create a network policy is still available. For more information, see Configuring an ingress network policy by using the FlowCollector custom resource . 1.2.1.6. FIPS compliance You can install and use the Network Observability Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 1.2.1.7. eBPF agent enhancements The following enhancements are available for the eBPF agent: If the DNS service maps to a different port than 53 , you can specify this DNS tracking port using spec.agent.ebpf.advanced.env.DNS_TRACKING_PORT . You can now use two ports for transport protocols (TCP, UDP, or SCTP) filtering rules. You can now filter on transport ports with a wildcard protocol by leaving the protocol field empty. For more information, see FlowCollector API specifications . 1.2.1.8. Network Observability CLI The Network Observability CLI ( oc netobserv ), is now generally available. The following enhancements have been made since the 1.6 Technology Preview release: * There are now eBPF enrichment filters for packet capture similar to flow capture. * You can now use filter tcp_flags with both flow and packets capture. * The auto-teardown option is available when max-bytes or max-time is reached. For more information, see Network Observability CLI and Network Observability CLI 1.7.0 . 1.2.2. Bug fixes Previously, when using a RHEL 9.2 real-time kernel, some of the webhooks did not work. Now, a fix is in place to check whether this RHEL 9.2 real-time kernel is being used. If the kernel is being used, a warning is displayed about the features that do not work, such as packet drop and neither Round-trip Time when using s390x architecture. The fix is in OpenShift 4.16 and later. ( NETOBSERV-1808 ) Previously, in the Manage panels dialog in the Overview tab, filtering on total , bar , donut , or line did not show a result. Now the available panels are correctly filtered. ( NETOBSERV-1540 ) Previously, under high stress, the eBPF agents were susceptible to enter into a state where they generated a high number of small flows, almost not aggregated. With this fix, the aggregation process is still maintained under high stress, resulting in less flows being created. This fix improves the resource consumption not only in the eBPF agent but also in flowlogs-pipeline and Loki. ( NETOBSERV-1564 ) Previously, when the workload_flows_total metric was enabled instead of the namespace_flows_total metric, the health dashboard stopped showing By namespace flow charts. With this fix, the health dashboard now shows the flow charts when the workload_flows_total is enabled. ( NETOBSERV-1746 ) Previously, when you used the FlowMetrics API to generate a custom metric and later modified its labels, such as by adding a new label, the metric stopped populating and an error was shown in the flowlogs-pipeline logs. With this fix, you can modify the labels, and the error is no longer raised in the flowlogs-pipeline logs. ( NETOBSERV-1748 ) Previously, there was an inconsistency with the default Loki WriteBatchSize configuration: it was set to 100 KB in the FlowCollector CRD default, and 10 MB in the OLM sample or default configuration. Both are now aligned to 10 MB, which generally provides better performances and less resource footprint. ( NETOBSERV-1766 ) Previously, the eBPF flow filter on ports was ignored if you did not specify a protocol. With this fix, you can set eBPF flow filters independently on ports and or protocols. 
( NETOBSERV-1779 ) Previously, traffic from Pods to Services was hidden from the Topology view . Only the return traffic from Services to Pods was visible. With this fix, that traffic is correctly displayed. ( NETOBSERV-1788 ) Previously, non-cluster administrator users that had access to Network Observability saw an error in the console plugin when they tried to filter for something that triggered auto-completion, such as a namespace. With this fix, no error is displayed, and the auto-completion returns the expected results. ( NETOBSERV-1798 ) When the secondary interface support was added, you had to iterate multiple times to register the per network namespace with the netlink to learn about interface notifications. At the same time, unsuccessful handlers caused a leaking file descriptor because with the TCX hook, unlike TC, handlers needed to be explicitly removed when the interface went down. Furthermore, when the network namespace was deleted, there was no Go close channel event to terminate the netlink goroutine socket, which caused Go threads to leak. Now, there are no longer leaking file descriptors or Go threads when you create or delete pods. ( NETOBSERV-1805 ) Previously, the ICMP type and value were displaying 'n/a' in the Traffic flows table even when related data was available in the flow JSON. With this fix, ICMP columns display related values as expected in the flow table. ( NETOBSERV-1806 ) Previously in the console plugin, it wasn't always possible to filter for unset fields, such as unset DNS latency. With this fix, filtering on unset fields is now possible. ( NETOBSERV-1816 ) Previously, when you cleared filters in the OpenShift web console plugin, sometimes the filters reappeared after you navigated to another page and returned to the page with filters. With this fix, filters do not unexpectedly reappear after they are cleared. ( NETOBSERV-1733 ) 1.2.3. Known issues When you use the must-gather tool with Network Observability, logs are not collected when the cluster has FIPS enabled. ( NETOBSERV-1830 ) When spec.networkPolicy is enabled in the FlowCollector , which installs a network policy on the netobserv namespace, it is impossible to use the FlowMetrics API. The network policy blocks calls to the validation webhook. As a workaround, use the following network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-hostnetwork
  namespace: netobserv
spec:
  podSelector:
    matchLabels:
      app: netobserv-operator
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              policy-group.network.openshift.io/host-network: ''
  policyTypes:
    - Ingress
( NETOBSERV-193 ) 1.3. Network Observability Operator 1.6.2 The following advisory is available for the Network Observability Operator 1.6.2: 2024:7074 Network Observability Operator 1.6.2 1.3.1. CVEs CVE-2024-24791 1.3.2. Bug fixes When the secondary interface support was added, there was a need to iterate multiple times to register the per network namespace with the netlink to learn about interface notifications. At the same time, unsuccessful handlers caused a leaking file descriptor because with the TCX hook, unlike TC, handlers needed to be explicitly removed when the interface went down. Now, file descriptors no longer leak when creating and deleting pods. ( NETOBSERV-1805 ) 1.3.3. Known issues There was a compatibility issue with console plugins that would have prevented Network Observability from being installed on future versions of an OpenShift Container Platform cluster.
By upgrading to 1.6.2, the compatibility issue is resolved and Network Observability can be installed as expected. ( NETOBSERV-1737 ) 1.4. Network Observability Operator 1.6.1 The following advisory is available for the Network Observability Operator 1.6.1: 2024:4785 Network Observability Operator 1.6.1 1.4.1. CVEs RHSA-2024:4237 RHSA-2024:4212 1.4.2. Bug fixes Previously, information about packet drops, such as the cause and TCP state, was only available in the Loki datastore and not in Prometheus. For that reason, the drop statistics in the OpenShift web console plugin Overview was only available with Loki. With this fix, information about packet drops is also added to metrics, so you can view drops statistics when Loki is disabled. ( NETOBSERV-1649 ) When the eBPF agent PacketDrop feature was enabled, and sampling was configured to a value greater than 1 , reported dropped bytes and dropped packets ignored the sampling configuration. While this was done on purpose, so as not to miss any drops, a side effect was that the reported proportion of drops compared with non-drops became biased. For example, at a very high sampling rate, such as 1:1000 , it was likely that almost all the traffic appears to be dropped when observed from the console plugin. With this fix, the sampling configuration is honored with dropped bytes and packets. ( NETOBSERV-1676 ) Previously, the SR-IOV secondary interface was not detected if the interface was created first and then the eBPF agent was deployed. It was only detected if the agent was deployed first and then the SR-IOV interface was created. With this fix, the SR-IOV secondary interface is detected no matter the sequence of the deployments. ( NETOBSERV-1697 ) Previously, when Loki was disabled, the Topology view in the OpenShift web console displayed the Cluster and Zone aggregation options in the slider beside the network topology diagram, even when the related features were not enabled. With this fix, the slider now only displays options according to the enabled features. ( NETOBSERV-1705 ) Previously, when Loki was disabled, and the OpenShift web console was first loading, an error would occur: Request failed with status code 400 Loki is disabled . With this fix, the errors no longer occur. ( NETOBSERV-1706 ) Previously, in the Topology view of the OpenShift web console, when clicking on the Step into icon to any graph node, the filters were not applied as required in order to set the focus to the selected graph node, resulting in showing a wide view of the Topology view in the OpenShift web console. With this fix, the filters are correctly set, effectively narrowing down the Topology . As part of this change, clicking the Step into icon on a Node now brings you to the Resource scope instead of the Namespaces scope. ( NETOBSERV-1720 ) Previously, when Loki was disabled, in the Topology view of the OpenShift web console with the Scope set to Owner , clicking on the Step into icon to any graph node would bring the Scope to Resource , which is not available without Loki, so an error message was shown. With this fix, the Step into icon is hidden in the Owner scope when Loki is disabled, so this scenario no longer occurs.( NETOBSERV-1721 ) Previously, when Loki was disabled, an error was displayed in the Topology view of the OpenShift web console when a group was set, but then the scope was changed so that the group becomes invalid. With this fix, the invalid group is removed, preventing the error. 
( NETOBSERV-1722 ) When creating a FlowCollector resource from the OpenShift web console Form view , as opposed to the YAML view , the following settings were incorrectly managed by the web console: agent.ebpf.metrics.enable and processor.subnetLabels.openShiftAutoDetect . These settings can only be disabled in the YAML view , not in the Form view . To avoid any confusion, these settings have been removed from the Form view . They are still accessible in the YAML view . ( NETOBSERV-1731 ) Previously, the eBPF agent was unable to clean up traffic control flows installed before an ungraceful crash, for example a crash due to a SIGTERM signal. This led to the creation of multiple traffic control flow filters with the same name, since the older ones were not removed. With this fix, all previously installed traffic control flows are cleaned up when the agent starts, before installing new ones. ( NETOBSERV-1732 ) Previously, when configuring custom subnet labels and keeping the OpenShift subnets auto-detection enabled, OpenShift subnets would take precedence over the custom ones, preventing the definition of custom labels for in cluster subnets. With this fix, custom defined subnets take precedence, allowing the definition of custom labels for in cluster subnets. ( NETOBSERV-1734 ) 1.5. Network Observability Operator 1.6.0 The following advisory is available for the Network Observability Operator 1.6.0: Network Observability Operator 1.6.0 Important Before upgrading to the latest version of the Network Observability Operator, you must Migrate removed stored versions of the FlowCollector CRD . An automated solution to this workaround is planned with NETOBSERV-1747 . 1.5.1. New features and enhancements 1.5.1.1. Enhanced use of Network Observability Operator without Loki You can now use Prometheus metrics and rely less on Loki for storage when using the Network Observability Operator. For more information, see Network Observability without Loki . 1.5.1.2. Custom metrics API You can create custom metrics out of flowlogs data by using the FlowMetrics API. Flowlogs data can be used with Prometheus labels to customize cluster information on your dashboards. You can add custom labels for any subnet that you want to identify in your flows and metrics. This enhancement can also be used to more easily identify external traffic by using the new labels SrcSubnetLabel and DstSubnetLabel , which exists both in flow logs and in metrics. Those fields are empty when there is external traffic, which gives a way to identify it. For more information, see Custom metrics and FlowMetric API reference . 1.5.1.3. eBPF performance enhancements Experience improved performances of the eBPF agent, in terms of CPU and memory, with the following updates: The eBPF agent now uses TCX webhooks instead of TC. The NetObserv / Health dashboard has a new section that shows eBPF metrics. Based on the new eBPF metrics, an alert notifies you when the eBPF agent is dropping flows. Loki storage demand decreases significantly now that duplicated flows are removed. Instead of having multiple, individual duplicated flows per network interface, there is one de-duplicated flow with a list of related network interfaces. Important With the duplicated flows update, the Interface and Interface Direction fields in the Network Traffic table are renamed to Interfaces and Interface Directions , so any bookmarked Quick filter queries using these fields need to be updated to interfaces and ifdirections . 
For more information, see Using the eBPF agent alert and Quick filters . 1.5.1.4. eBPF collection rule-based filtering You can use rule-based filtering to reduce the volume of created flows. When this option is enabled, the Netobserv / Health dashboard for eBPF agent statistics has the Filtered flows rate view. For more information, see eBPF flow rule filter . 1.5.2. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope 1.5.2.1. Network Observability CLI You can debug and troubleshoot network traffic issues without needing to install the Network Observability Operator by using the Network Observability CLI. Capture and visualize flow and packet data in real-time with no persistent storage requirement during the capture. For more information, see Network Observability CLI and Network Observability CLI 1.6.0 . 1.5.3. Bug fixes Previously, a dead link to the OpenShift containter platform documentation was displayed in the Operator Lifecycle Manager (OLM) form for the FlowMetrics API creation. Now the link has been updated to point to a valid page. ( NETOBSERV-1607 ) Previously, the Network Observability Operator description in the Operator Hub displayed a broken link to the documentation. With this fix, this link is restored. ( NETOBSERV-1544 ) Previously, if Loki was disabled and the Loki Mode was set to LokiStack , or if Loki manual TLS configuration was configured, the Network Observability Operator still tried to read the Loki CA certificates. With this fix, when Loki is disabled, the Loki certificates are not read, even if there are settings in the Loki configuration. ( NETOBSERV-1647 ) Previously, the oc must-gather plugin for the Network Observability Operator was only working on the amd64 architecture and failing on all others because the plugin was using amd64 for the oc binary. Now, the Network Observability Operator oc must-gather plugin collects logs on any architecture platform. Previously, when filtering on IP addresses using not equal to , the Network Observability Operator would return a request error. Now, the IP filtering works in both equal and not equal to cases for IP addresses and ranges. ( NETOBSERV-1630 ) Previously, when a user was not an admin, the error messages were not consistent with the selected tab of the Network Traffic view in the web console. Now, the user not admin error displays on any tab with improved display.( NETOBSERV-1621 ) 1.5.4. Known issues When the eBPF agent PacketDrop feature is enabled, and sampling is configured to a value greater than 1 , reported dropped bytes and dropped packets ignore the sampling configuration. While this is done on purpose to not miss any drops, a side effect is that the reported proportion of drops compared to non-drops becomes biased. For example, at a very high sampling rate, such as 1:1000 , it is likely that almost all the traffic appears to be dropped when observed from the console plugin. ( NETOBSERV-1676 ) In the Manage panels pop-up window in the Overview tab, filtering on total , bar , donut , or line does not show any result. ( NETOBSERV-1540 ) The SR-IOV secondary interface is not detected if the interface was created first and then the eBPF agent was deployed. It is only detected if the agent was deployed first and then the SR-IOV interface is created. 
( NETOBSERV-1697 ) When Loki is disabled, the Topology view in the OpenShift web console always shows the Cluster and Zone aggregation options in the slider beside the network topology diagram, even when the related features are not enabled. There is no specific workaround, besides ignoring these slider options. ( NETOBSERV-1705 ) When Loki is disabled, and the OpenShift web console first loads, it might display an error: Request failed with status code 400 Loki is disabled . As a workaround, you can continue switching content on the Network Traffic page, such as clicking between the Topology and the Overview tabs. The error should disappear. ( NETOBSERV-1706 ) 1.6. Network Observability Operator 1.5.0 The following advisory is available for the Network Observability Operator 1.5.0: Network Observability Operator 1.5.0 1.6.1. New features and enhancements 1.6.1.1. DNS tracking enhancements In 1.5, the TCP protocol is now supported in addition to UDP. New dashboards are also added to the Overview view of the Network Traffic page. For more information, see Configuring DNS tracking and Working with DNS tracking . 1.6.1.2. Round-trip time (RTT) You can use TCP handshake Round-Trip Time (RTT) captured from the fentry/tcp_rcv_established Extended Berkeley Packet Filter (eBPF) hookpoint to read smoothed round-trip time (SRTT) and analyze network flows. In the Overview , Network Traffic , and Topology pages in web console, you can monitor network traffic and troubleshoot with RTT metrics, filtering, and edge labeling. For more information, see RTT Overview and Working with RTT . 1.6.1.3. Metrics, dashboards, and alerts enhancements The Network Observability metrics dashboards in Observe Dashboards NetObserv have new metrics types you can use to create Prometheus alerts. You can now define available metrics in the includeList specification. In releases, these metrics were defined in the ignoreTags specification. For a complete list of these metrics, see Network Observability Metrics . 1.6.1.4. Improvements for Network Observability without Loki You can create Prometheus alerts for the Netobserv dashboard using DNS, Packet drop, and RTT metrics, even if you don't use Loki. In the version of Network Observability, 1.4, these metrics were only available for querying and analysis in the Network Traffic , Overview , and Topology views, which are not available without Loki. For more information, see Network Observability Metrics . 1.6.1.5. Availability zones You can configure the FlowCollector resource to collect information about the cluster availability zones. This configuration enriches the network flow data with the topology.kubernetes.io/zone label value applied to the nodes. For more information, see Working with availability zones . 1.6.1.6. Notable enhancements The 1.5 release of the Network Observability Operator adds improvements and new capabilities to the OpenShift Container Platform web console plugin and the Operator configuration. Performance enhancements The spec.agent.ebpf.kafkaBatchSize default is changed from 10MB to 1MB to enhance eBPF performance when using Kafka. Important When upgrading from an existing installation, this new value is not set automatically in the configuration. If you monitor a performance regression with the eBPF Agent memory consumption after upgrading, you might consider reducing the kafkaBatchSize to the new value. Web console enhancements: There are new panels added to the Overview view for DNS and RTT: Min, Max, P90, P99. 
There are new panel display options added: Focus on one panel while keeping others viewable but with smaller focus. Switch graph type. Show Top and Overall . A collection latency warning is shown in the Custom time range pop-up window. There is enhanced visibility for the contents of the Manage panels and Manage columns pop-up windows. The Differentiated Services Code Point (DSCP) field for egress QoS is available for filtering QoS DSCP in the web console Network Traffic page. Configuration enhancements: The LokiStack mode in the spec.loki.mode specification simplifies installation by automatically setting URLs, TLS, cluster roles and a cluster role binding, as well as the authToken value. The Manual mode allows more control over configuration of these settings. The API version changes from flows.netobserv.io/v1beta1 to flows.netobserv.io/v1beta2 . 1.6.2. Bug fixes Previously, it was not possible to register the console plugin manually in the web console interface if the automatic registration of the console plugin was disabled. If the spec.console.register value was set to false in the FlowCollector resource, the Operator would override and erase the plugin registration. With this fix, setting the spec.console.register value to false does not impact the console plugin registration or registration removal. As a result, the plugin can be safely registered manually. ( NETOBSERV-1134 ) Previously, using the default metrics settings, the NetObserv/Health dashboard was showing an empty graph named Flows Overhead . This metric was only available by removing "namespaces-flows" and "namespaces" from the ignoreTags list. With this fix, this metric is visible when you use the default metrics setting. ( NETOBSERV-1351 ) Previously, the node on which the eBPF Agent was running would not resolve with a specific cluster configuration. This resulted in cascading consequences that culminated in a failure to provide some of the traffic metrics. With this fix, the eBPF agent's node IP is safely provided by the Operator, inferred from the pod status. Now, the missing metrics are restored. ( NETOBSERV-1430 ) Previously, the Loki error 'Input size too long' error for the Loki Operator did not include additional information to troubleshoot the problem. With this fix, help is directly displayed in the web console to the error with a direct link for more guidance. ( NETOBSERV-1464 ) Previously, the console plugin read timeout was forced to 30s. With the FlowCollector v1beta2 API update, you can configure the spec.loki.readTimeout specification to update this value according to the Loki Operator queryTimeout limit. ( NETOBSERV-1443 ) Previously, the Operator bundle did not display some of the supported features by CSV annotations as expected, such as features.operators.openshift.io/... With this fix, these annotations are set in the CSV as expected. ( NETOBSERV-1305 ) Previously, the FlowCollector status sometimes oscillated between DeploymentInProgress and Ready states during reconciliation. With this fix, the status only becomes Ready when all of the underlying components are fully ready. ( NETOBSERV-1293 ) 1.6.3. Known issues When trying to access the web console, cache issues on OCP 4.14.10 prevent access to the Observe view. The web console shows the error message: Failed to get a valid plugin manifest from /api/plugins/monitoring-plugin/ . The recommended workaround is to update the cluster to the latest minor version. 
If this does not work, you need to apply the workarounds described in this Red Hat Knowledgebase article .( NETOBSERV-1493 ) Since the 1.3.0 release of the Network Observability Operator, installing the Operator causes a warning kernel taint to appear. The reason for this error is that the Network Observability eBPF agent has memory constraints that prevent preallocating the entire hashmap table. The Operator eBPF agent sets the BPF_F_NO_PREALLOC flag so that pre-allocation is disabled when the hashmap is too memory expansive. 1.7. Network Observability Operator 1.4.2 The following advisory is available for the Network Observability Operator 1.4.2: 2023:6787 Network Observability Operator 1.4.2 1.7.1. CVEs 2023-39325 2023-44487 1.8. Network Observability Operator 1.4.1 The following advisory is available for the Network Observability Operator 1.4.1: 2023:5974 Network Observability Operator 1.4.1 1.8.1. CVEs 2023-44487 2023-39325 2023-29406 2023-29409 2023-39322 2023-39318 2023-39319 2023-39321 1.8.2. Bug fixes In 1.4, there was a known issue when sending network flow data to Kafka. The Kafka message key was ignored, causing an error with connection tracking. Now the key is used for partitioning, so each flow from the same connection is sent to the same processor. ( NETOBSERV-926 ) In 1.4, the Inner flow direction was introduced to account for flows between pods running on the same node. Flows with the Inner direction were not taken into account in the generated Prometheus metrics derived from flows, resulting in under-evaluated bytes and packets rates. Now, derived metrics are including flows with the Inner direction, providing correct bytes and packets rates. ( NETOBSERV-1344 ) 1.9. Network Observability Operator 1.4.0 The following advisory is available for the Network Observability Operator 1.4.0: RHSA-2023:5379 Network Observability Operator 1.4.0 1.9.1. Channel removal You must switch your channel from v1.0.x to stable to receive the latest Operator updates. The v1.0.x channel is now removed. 1.9.2. New features and enhancements 1.9.2.1. Notable enhancements The 1.4 release of the Network Observability Operator adds improvements and new capabilities to the OpenShift Container Platform web console plugin and the Operator configuration. Web console enhancements: In the Query Options , the Duplicate flows checkbox is added to choose whether or not to show duplicated flows. You can now filter source and destination traffic with One-way , Back-and-forth , and Swap filters. The Network Observability metrics dashboards in Observe Dashboards NetObserv and NetObserv / Health are modified as follows: The NetObserv dashboard shows top bytes, packets sent, packets received per nodes, namespaces, and workloads. Flow graphs are removed from this dashboard. The NetObserv / Health dashboard shows flows overhead as well as top flow rates per nodes, namespaces, and workloads. Infrastructure and Application metrics are shown in a split-view for namespaces and workloads. For more information, see Network Observability metrics and Quick filters . Configuration enhancements: You now have the option to specify different namespaces for any configured ConfigMap or Secret reference, such as in certificates configuration. The spec.processor.clusterName parameter is added so that the name of the cluster appears in the flows data. This is useful in a multi-cluster context. When using OpenShift Container Platform, leave empty to make it automatically determined. 
For more information, see Flow Collector sample resource and Flow Collector API Reference . 1.9.2.2. Network Observability without Loki The Network Observability Operator is now functional and usable without Loki. If Loki is not installed, it can only export flows to KAFKA or IPFIX format and provide metrics in the Network Observability metrics dashboards. For more information, see Network Observability without Loki . 1.9.2.3. DNS tracking In 1.4, the Network Observability Operator makes use of eBPF tracepoint hooks to enable DNS tracking. You can monitor your network, conduct security analysis, and troubleshoot DNS issues in the Network Traffic and Overview pages in the web console. For more information, see Configuring DNS tracking and Working with DNS tracking . 1.9.2.4. SR-IOV support You can now collect traffic from a cluster with Single Root I/O Virtualization (SR-IOV) device. For more information, see Configuring the monitoring of SR-IOV interface traffic . 1.9.2.5. IPFIX exporter support You can now export eBPF-enriched network flows to the IPFIX collector. For more information, see Export enriched network flow data . 1.9.2.6. Packet drops In the 1.4 release of the Network Observability Operator, eBPF tracepoint hooks are used to enable packet drop tracking. You can now detect and analyze the cause for packet drops and make decisions to optimize network performance. In OpenShift Container Platform 4.14 and later, both host drops and OVS drops are detected. In OpenShift Container Platform 4.13, only host drops are detected. For more information, see Configuring packet drop tracking and Working with packet drops . 1.9.2.7. s390x architecture support Network Observability Operator can now run on s390x architecture. Previously it ran on amd64 , ppc64le , or arm64 . 1.9.3. Bug fixes Previously, the Prometheus metrics exported by Network Observability were computed out of potentially duplicated network flows. In the related dashboards, from Observe Dashboards , this could result in potentially doubled rates. Note that dashboards from the Network Traffic view were not affected. Now, network flows are filtered to eliminate duplicates before metrics calculation, which results in correct traffic rates displayed in the dashboards. ( NETOBSERV-1131 ) Previously, the Network Observability Operator agents were not able to capture traffic on network interfaces when configured with Multus or SR-IOV, non-default network namespaces. Now, all available network namespaces are recognized and used for capturing flows, allowing capturing traffic for SR-IOV. There are configurations needed for the FlowCollector and SRIOVnetwork custom resource to collect traffic. ( NETOBSERV-1283 ) Previously, in the Network Observability Operator details from Operators Installed Operators , the FlowCollector Status field might have reported incorrect information about the state of the deployment. The status field now shows the proper conditions with improved messages. The history of events is kept, ordered by event date. ( NETOBSERV-1224 ) Previously, during spikes of network traffic load, certain eBPF pods were OOM-killed and went into a CrashLoopBackOff state. Now, the eBPF agent memory footprint is improved, so pods are not OOM-killed and entering a CrashLoopBackOff state. ( NETOBSERV-975 ) Previously when processor.metrics.tls was set to PROVIDED the insecureSkipVerify option value was forced to be true . Now you can set insecureSkipVerify to true or false , and provide a CA certificate if needed. 
( NETOBSERV-1087 ) 1.9.4. Known issues Since the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate change periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater. ( NETOBSERV-980 ) Currently, when spec.agent.ebpf.features includes DNSTracking, larger DNS packets require the eBPF agent to look for DNS header outside of the 1st socket buffer (SKB) segment. A new eBPF agent helper function needs to be implemented to support it. Currently, there is no workaround for this issue. ( NETOBSERV-1304 ) Currently, when spec.agent.ebpf.features includes DNSTracking, DNS over TCP packets requires the eBPF agent to look for DNS header outside of the 1st SKB segment. A new eBPF agent helper function needs to be implemented to support it. Currently, there is no workaround for this issue. ( NETOBSERV-1245 ) Currently, when using a KAFKA deployment model, if conversation tracking is configured, conversation events might be duplicated across Kafka consumers, resulting in inconsistent tracking of conversations, and incorrect volumetric data. For that reason, it is not recommended to configure conversation tracking when deploymentModel is set to KAFKA . ( NETOBSERV-926 ) Currently, when the processor.metrics.server.tls.type is configured to use a PROVIDED certificate, the operator enters an unsteady state that might affect its performance and resource consumption. It is recommended to not use a PROVIDED certificate until this issue is resolved, and instead using an auto-generated certificate, setting processor.metrics.server.tls.type to AUTO . ( NETOBSERV-1293 Since the 1.3.0 release of the Network Observability Operator, installing the Operator causes a warning kernel taint to appear. The reason for this error is that the Network Observability eBPF agent has memory constraints that prevent preallocating the entire hashmap table. The Operator eBPF agent sets the BPF_F_NO_PREALLOC flag so that pre-allocation is disabled when the hashmap is too memory expansive. 1.10. Network Observability Operator 1.3.0 The following advisory is available for the Network Observability Operator 1.3.0: RHSA-2023:3905 Network Observability Operator 1.3.0 1.10.1. Channel deprecation You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in the release. 1.10.2. New features and enhancements 1.10.2.1. Multi-tenancy in Network Observability System administrators can allow and restrict individual user access, or group access, to the flows stored in Loki. For more information, see Multi-tenancy in Network Observability . 1.10.2.2. Flow-based metrics dashboard This release adds a new dashboard, which provides an overview of the network flows in your OpenShift Container Platform cluster. For more information, see Network Observability metrics . 1.10.2.3. Troubleshooting with the must-gather tool Information about the Network Observability Operator can now be included in the must-gather data for troubleshooting. For more information, see Network Observability must-gather . 1.10.2.4. Multiple architectures now supported Network Observability Operator can now run on an amd64 , ppc64le , or arm64 architectures. Previously, it only ran on amd64 . 
1.10.3. Deprecated features 1.10.3.1. Deprecated configuration parameter setting The release of Network Observability Operator 1.3 deprecates the spec.Loki.authToken HOST setting. When using the Loki Operator, you must now only use the FORWARD setting. 1.10.4. Bug fixes Previously, when the Operator was installed from the CLI, the Role and RoleBinding that are necessary for the Cluster Monitoring Operator to read the metrics were not installed as expected. The issue did not occur when the operator was installed from the web console. Now, either way of installing the Operator installs the required Role and RoleBinding . ( NETOBSERV-1003 ) Since version 1.2, the Network Observability Operator can raise alerts when a problem occurs with the flows collection. Previously, due to a bug, the related configuration to disable alerts, spec.processor.metrics.disableAlerts was not working as expected and sometimes ineffectual. Now, this configuration is fixed so that it is possible to disable the alerts. ( NETOBSERV-976 ) Previously, when Network Observability was configured with spec.loki.authToken set to DISABLED , only a kubeadmin cluster administrator was able to view network flows. Other types of cluster administrators received authorization failure. Now, any cluster administrator is able to view network flows. ( NETOBSERV-972 ) Previously, a bug prevented users from setting spec.consolePlugin.portNaming.enable to false . Now, this setting can be set to false to disable port-to-service name translation. ( NETOBSERV-971 ) Previously, the metrics exposed by the console plugin were not collected by the Cluster Monitoring Operator (Prometheus), due to an incorrect configuration. Now the configuration has been fixed so that the console plugin metrics are correctly collected and accessible from the OpenShift Container Platform web console. ( NETOBSERV-765 ) Previously, when processor.metrics.tls was set to AUTO in the FlowCollector , the flowlogs-pipeline servicemonitor did not adapt the appropriate TLS scheme, and metrics were not visible in the web console. Now the issue is fixed for AUTO mode. ( NETOBSERV-1070 ) Previously, certificate configuration, such as used for Kafka and Loki, did not allow specifying a namespace field, implying that the certificates had to be in the same namespace where Network Observability is deployed. Moreover, when using Kafka with TLS/mTLS, the user had to manually copy the certificate(s) to the privileged namespace where the eBPF agent pods are deployed and manually manage certificate updates, such as in the case of certificate rotation. Now, Network Observability setup is simplified by adding a namespace field for certificates in the FlowCollector resource. As a result, users can now install Loki or Kafka in different namespaces without needing to manually copy their certificates in the Network Observability namespace. The original certificates are watched so that the copies are automatically updated when needed. ( NETOBSERV-773 ) Previously, the SCTP, ICMPv4 and ICMPv6 protocols were not covered by the Network Observability agents, resulting in a less comprehensive network flows coverage. These protocols are now recognized to improve the flows coverage. ( NETOBSERV-934 ) 1.10.5. Known issues When processor.metrics.tls is set to PROVIDED in the FlowCollector , the flowlogs-pipeline servicemonitor is not adapted to the TLS scheme. 
( NETOBSERV-1087 ) Since the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate change periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater.( NETOBSERV-980 ) When you install the Operator, a warning kernel taint can appear. The reason for this error is that the Network Observability eBPF agent has memory constraints that prevent preallocating the entire hashmap table. The Operator eBPF agent sets the BPF_F_NO_PREALLOC flag so that pre-allocation is disabled when the hashmap is too memory expansive. 1.11. Network Observability Operator 1.2.0 The following advisory is available for the Network Observability Operator 1.2.0: RHSA-2023:1817 Network Observability Operator 1.2.0 1.11.1. Preparing for the update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. Until the 1.2 release of the Network Observability Operator, the only channel available was v1.0.x . The 1.2 release of the Network Observability Operator introduces the stable update channel for tracking and receiving updates. You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in a following release. 1.11.2. New features and enhancements 1.11.2.1. Histogram in Traffic Flows view You can now choose to show a histogram bar chart of flows over time. The histogram enables you to visualize the history of flows without hitting the Loki query limit. For more information, see Using the histogram . 1.11.2.2. Conversation tracking You can now query flows by Log Type , which enables grouping network flows that are part of the same conversation. For more information, see Working with conversations . 1.11.2.3. Network Observability health alerts The Network Observability Operator now creates automatic alerts if the flowlogs-pipeline is dropping flows because of errors at the write stage or if the Loki ingestion rate limit has been reached. For more information, see Health dashboards . 1.11.3. Bug fixes Previously, after changing the namespace value in the FlowCollector spec, eBPF agent pods running in the namespace were not appropriately deleted. Now, the pods running in the namespace are appropriately deleted. ( NETOBSERV-774 ) Previously, after changing the caCert.name value in the FlowCollector spec (such as in Loki section), FlowLogs-Pipeline pods and Console plug-in pods were not restarted, therefore they were unaware of the configuration change. Now, the pods are restarted, so they get the configuration change. ( NETOBSERV-772 ) Previously, network flows between pods running on different nodes were sometimes not correctly identified as being duplicates because they are captured by different network interfaces. This resulted in over-estimated metrics displayed in the console plug-in. Now, flows are correctly identified as duplicates, and the console plug-in displays accurate metrics. ( NETOBSERV-755 ) The "reporter" option in the console plug-in is used to filter flows based on the observation point of either source node or destination node. Previously, this option mixed the flows regardless of the node observation point. 
This was due to network flows being incorrectly reported as Ingress or Egress at the node level. Now, the network flow direction reporting is correct. The "reporter" option filters for source observation point, or destination observation point, as expected. ( NETOBSERV-696 ) Previously, for agents configured to send flows directly to the processor as gRPC+protobuf requests, the submitted payload could be too large and is rejected by the processors' GRPC server. This occurred under very-high-load scenarios and with only some configurations of the agent. The agent logged an error message, such as: grpc: received message larger than max . As a consequence, there was information loss about those flows. Now, the gRPC payload is split into several messages when the size exceeds a threshold. As a result, the server maintains connectivity. ( NETOBSERV-617 ) 1.11.4. Known issue In the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate transition periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate transition. ( NETOBSERV-980 ) 1.11.5. Notable technical changes Previously, you could install the Network Observability Operator using a custom namespace. This release introduces the conversion webhook which changes the ClusterServiceVersion . Because of this change, all the available namespaces are no longer listed. Additionally, to enable Operator metrics collection, namespaces that are shared with other Operators, like the openshift-operators namespace, cannot be used. Now, the Operator must be installed in the openshift-netobserv-operator namespace. You cannot automatically upgrade to the new Operator version if you previously installed the Network Observability Operator using a custom namespace. If you previously installed the Operator using a custom namespace, you must delete the instance of the Operator that was installed and re-install your operator in the openshift-netobserv-operator namespace. It is important to note that custom namespaces, such as the commonly used netobserv namespace, are still possible for the FlowCollector , Loki, Kafka, and other plug-ins. ( NETOBSERV-907 )( NETOBSERV-956 ) 1.12. Network Observability Operator 1.1.0 The following advisory is available for the Network Observability Operator 1.1.0: RHSA-2023:0786 Network Observability Operator Security Advisory Update The Network Observability Operator is now stable and the release channel is upgraded to v1.1.0 . 1.12.1. Bug fix Previously, unless the Loki authToken configuration was set to FORWARD mode, authentication was no longer enforced, allowing any user who could connect to the OpenShift Container Platform console in an OpenShift Container Platform cluster to retrieve flows without authentication. Now, regardless of the Loki authToken mode, only cluster administrators can retrieve flows. ( BZ#2169468 ) | [
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-from-hostnetwork namespace: netobserv spec: podSelector: matchLabels: app: netobserv-operator ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: '' policyTypes: - Ingress"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/network-observability-operator-release-notes |
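The 1.5 release notes above introduce the LokiStack value for the spec.loki.mode specification and the flows.netobserv.io/v1beta2 API version. The fragment below is a minimal sketch of a FlowCollector that uses them; the resource name cluster and the commented lokiStack reference are assumptions for illustration and are not taken from these release notes.

apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster            # assumed resource name
spec:
  loki:
    mode: LokiStack         # sets URLs, TLS, cluster roles/bindings, and authToken automatically
    # lokiStack:
    #   name: loki          # assumed name of the LokiStack instance

With mode set to Manual instead, each of these settings is configured explicitly, as described in the configuration enhancements above.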
Preface | Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7.9 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/preface |
Chapter 66. Slack Source | Chapter 66. Slack Source Receive messages from a Slack channel. 66.1. Configuration Options The following table summarizes the configuration options available for the slack-source Kamelet: Property Name Description Type Default Example channel * Channel The Slack channel to receive messages from string "#myroom" token * Token The token to access Slack. A Slack app is needed. This app needs to have channels:history and channels:read permissions. The Bot User OAuth Access Token is the kind of token needed. string Note Fields marked with an asterisk (*) are mandatory. 66.2. Dependencies At runtime, the slack-source Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:slack camel:jackson 66.3. Usage This section describes how you can use the slack-source . 66.3.1. Knative Source You can use the slack-source Kamelet as a Knative source by binding it to a Knative object. slack-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: "#myroom" token: "The Token" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 66.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 66.3.1.2. Procedure for using the cluster CLI Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f slack-source-binding.yaml 66.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 66.3.2. Kafka Source You can use the slack-source Kamelet as a Kafka source by binding it to a Kafka topic. slack-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: "#myroom" token: "The Token" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 66.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 66.3.2.2. Procedure for using the cluster CLI Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f slack-source-binding.yaml 66.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 66.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/slack-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: \"#myroom\" token: \"The Token\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f slack-source-binding.yaml",
"kamel bind slack-source -p \"source.channel=#myroom\" -p \"source.token=The Token\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: slack-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: slack-source properties: channel: \"#myroom\" token: \"The Token\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f slack-source-binding.yaml",
"kamel bind slack-source -p \"source.channel=#myroom\" -p \"source.token=The Token\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/slack-source |
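After applying either binding, it is worth confirming that the KameletBinding exists and that the integration it generates is running. The commands below are a sketch; the binding name matches the examples above, and the assumption that the generated integration shares that name may not hold in every cluster.

# Check the binding and the integrations generated from it
oc get kameletbinding slack-source-binding
oc get integrations
# Stream the logs of the integration (name assumed to match the binding)
kamel logs slack-source-binding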
Chapter 22. Creating uprobes with perf | Chapter 22. Creating uprobes with perf 22.1. Creating uprobes at the function level with perf You can use the perf tool to create dynamic tracepoints at arbitrary points in a process or application. These tracepoints can then be used in conjunction with other perf tools such as perf stat and perf record to better understand the process or application's behavior. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Create the uprobe in the process or application you are interested in monitoring at a location of interest within the process or application: Additional resources perf-probe man page on your system Recording and analyzing performance profiles with perf Counting events during process execution with perf stat 22.2. Creating uprobes on lines within a function with perf You can use the perf tool to create dynamic tracepoints at individual lines within a function. These tracepoints can then be used in conjunction with other perf tools such as perf stat and perf record to better understand the process or application's behavior. Prerequisites You have the perf user space tool installed as described in Installing perf . You have the debugging symbols for your executable: Note To do this, the debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information, the -g option in GCC. Procedure View the function lines where you can place a uprobe: Output of this command looks similar to: Create the uprobe for the desired function line: 22.3. Perf script output of data recorded over uprobes A common method to analyze data collected using uprobes is using the perf script command to read a perf.data file and display a detailed trace of the recorded workload. In the perf script example output: A uprobe is added to the function isprime() in a program called my_prog . a is a function argument added to the uprobe. Alternatively, a could be an arbitrary variable visible in the code scope of where you add your uprobe: | [
"perf probe -x /path/to/executable -a function Added new event: probe_executable:function (on function in /path/to/executable ) You can now use it in all perf tools, such as: perf record -e probe_executable:function -aR sleep 1",
"objdump -t ./your_executable | head",
"perf probe -x ./your_executable -L main",
"<main@/home/ user / my_executable :0> 0 int main(int argc, const char **argv) 1 { int err; const char *cmd; char sbuf[STRERR_BUFSIZE]; /* libsubcmd init */ 7 exec_cmd_init(\"perf\", PREFIX, PERF_EXEC_PATH, EXEC_PATH_ENVIRONMENT); 8 pager_init(PERF_PAGER_ENVIRONMENT);",
"perf probe -x ./ my_executable main:8 Added new event: probe_my_executable:main_L8 (on main:8 in /home/user/my_executable) You can now use it in all perf tools, such as: perf record -e probe_my_executable:main_L8 -aR sleep 1",
"perf script my_prog 1367 [007] 10802159.906593: probe_my_prog:isprime: (400551) a=2 my_prog 1367 [007] 10802159.906623: probe_my_prog:isprime: (400551) a=3 my_prog 1367 [007] 10802159.906625: probe_my_prog:isprime: (400551) a=4 my_prog 1367 [007] 10802159.906627: probe_my_prog:isprime: (400551) a=5 my_prog 1367 [007] 10802159.906629: probe_my_prog:isprime: (400551) a=6 my_prog 1367 [007] 10802159.906631: probe_my_prog:isprime: (400551) a=7 my_prog 1367 [007] 10802159.906633: probe_my_prog:isprime: (400551) a=13 my_prog 1367 [007] 10802159.906635: probe_my_prog:isprime: (400551) a=17 my_prog 1367 [007] 10802159.906637: probe_my_prog:isprime: (400551) a=19"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/creating-uprobes-with-perf_monitoring-and-managing-system-status-and-performance |
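Once a uprobe such as probe_my_executable:main_L8 exists, it can be combined with perf record, perf script, and perf stat as suggested in the command output above. The sketch below reuses the event and executable names from the examples; replace them with your own.

# Record only the uprobe events while running the instrumented program
perf record -e probe_my_executable:main_L8 ./my_executable
# Print a detailed trace of the recorded samples
perf script
# Count how often the probed line is hit during one run
perf stat -e probe_my_executable:main_L8 ./my_executable
# Delete the probe when it is no longer needed
perf probe --del probe_my_executable:main_L8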
Chapter 7. opm CLI | Chapter 7. opm CLI 7.1. Installing the opm CLI 7.1.1. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging format for more information about the bundle format. To create a bundle image using the Operator SDK, see Working with bundle images . 7.1.2. Installing the opm CLI You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages. RHEL 8 meets these requirements: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version 7.1.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning catalogs. 7.2. opm CLI reference The opm command-line interface (CLI) is a tool for creating and maintaining Operator catalogs. opm CLI syntax USD opm <command> [<subcommand>] [<argument>] [<flags>] Table 7.1. Global flags Flag Description -skip-tls-verify Skip TLS certificate verification for container image registries while pulling bundles or indexes. --use-http When you pull bundles, use plain HTTP for container image registries. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 7.2.1. generate Generate various artifacts for declarative config indexes. Command syntax USD opm generate <subcommand> [<flags>] Table 7.2. generate subcommands Subcommand Description dockerfile Generate a Dockerfile for a declarative config index. Table 7.3. generate flags Flags Description -h , --help Help for generate. 7.2.1.1. dockerfile Generate a Dockerfile for a declarative config index. Important This command creates a Dockerfile in the same directory as the <dcRootDir> (named <dcDirName>.Dockerfile ) that is used to build the index. 
If a Dockerfile with the same name already exists, this command fails. When specifying extra labels, if duplicate keys exist, only the last value of each duplicate key gets added to the generated Dockerfile. Command syntax USD opm generate dockerfile <dcRootDir> [<flags>] Table 7.4. generate dockerfile flags Flag Description -i, --binary-image (string) Image in which to build catalog. The default value is quay.io/operator-framework/opm:latest . -l , --extra-labels (string) Extra labels to include in the generated Dockerfile. Labels have the form key=value . -h , --help Help for Dockerfile. Note To build with the official Red Hat image, use the registry.redhat.io/openshift4/ose-operator-registry:v4.14 value with the -i flag. 7.2.2. index Generate Operator index for SQLite database format container images from pre-existing Operator bundles. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see "Additional resources". Command syntax USD opm index <subcommand> [<flags>] Table 7.5. index subcommands Subcommand Description add Add Operator bundles to an index. prune Prune an index of all but specified packages. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. rm Delete an entire Operator from an index. 7.2.2.1. add Add Operator bundles to an index. Command syntax USD opm index add [<flags>] Table 7.6. index add flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -b , --bundles (strings) Comma-separated list of bundles to add. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to add to. --generate If enabled, only creates the Dockerfile and saves it to local disk. --mode (string) Graph update mode that defines how channel graphs are updated: replaces (the default value), semver , or semver-skippatch . -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 7.2.2.2. prune Prune an index of all but specified packages. Command syntax USD opm index prune [<flags>] Table 7.7. index prune flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. 
--generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 7.2.2.3. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. Command syntax USD opm index prune-stranded [<flags>] Table 7.8. index prune-stranded flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 7.2.2.4. rm Delete an entire Operator from an index. Command syntax USD opm index rm [<flags>] Table 7.9. index rm flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to delete from. --generate If enabled, only creates the Dockerfile and saves it to local disk. -o , --operators (strings) Comma-separated list of Operators to delete. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. Additional resources Operator Framework packaging format Managing custom catalogs Mirroring images for a disconnected installation using the oc-mirror plugin 7.2.3. init Generate an olm.package declarative config blob. Command syntax USD opm init <package_name> [<flags>] Table 7.10. init flags Flag Description -c , --default-channel (string) The channel that subscriptions will default to if unspecified. -d , --description (string) Path to the Operator's README.md or other documentation. -i , --icon (string) Path to package's icon. -o , --output (string) Output format: json (the default value) or yaml . 7.2.4. migrate Migrate a SQLite database format index image or database file to a file-based catalog. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
Command syntax USD opm migrate <index_ref> <output_dir> [<flags>] Table 7.11. migrate flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 7.2.5. render Generate a declarative config blob from the provided index images, bundle images, and SQLite database files. Command syntax USD opm render <index_image | bundle_image | sqlite_file> [<flags>] Table 7.12. render flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 7.2.6. serve Serve declarative configs via a GRPC server. Note The declarative config directory is loaded by the serve command at startup. Changes made to the declarative config after this command starts are not reflected in the served content. Command syntax USD opm serve <source_path> [<flags>] Table 7.13. serve flags Flag Description --cache-dir (string) If this flag is set, it syncs and persists the server cache directory. --cache-enforce-integrity Exits with an error if the cache is not present or is invalidated. The default value is true when the --cache-dir flag is set and the --cache-only flag is false . Otherwise, the default is false . --cache-only Syncs the serve cache and exits without serving. --debug Enables debug logging. h , --help Help for serve. -p , --port (string) The port number for the service. The default value is 50051 . --pprof-addr (string) The address of the startup profiling endpoint. The format is Addr:Port . -t , --termination-log (string) The path to a container termination log file. The default value is /dev/termination-log . 7.2.7. validate Validate the declarative config JSON file(s) in a given directory. Command syntax USD opm validate <directory> [<flags>] | [
"tar xvf <file>",
"echo USDPATH",
"sudo mv ./opm /usr/local/bin/",
"C:\\> path",
"C:\\> move opm.exe <directory>",
"opm version",
"opm <command> [<subcommand>] [<argument>] [<flags>]",
"opm generate <subcommand> [<flags>]",
"opm generate dockerfile <dcRootDir> [<flags>]",
"opm index <subcommand> [<flags>]",
"opm index add [<flags>]",
"opm index prune [<flags>]",
"opm index prune-stranded [<flags>]",
"opm index rm [<flags>]",
"opm init <package_name> [<flags>]",
"opm migrate <index_ref> <output_dir> [<flags>]",
"opm render <index_image | bundle_image | sqlite_file> [<flags>]",
"opm serve <source_path> [<flags>]",
"opm validate <directory> [<flags>]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cli_tools/opm-cli |
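The subcommands above can be combined into a small file-based catalog workflow. The sketch below is illustrative only: the package name, bundle image reference, and directory names are assumptions, the bundle image must already exist in a registry, and a complete catalog also needs an olm.channel entry for the package, which is omitted here.

# Initialize a declarative config for a package and render an existing bundle into it
mkdir my-catalog
opm init my-operator --default-channel=stable --output=yaml > my-catalog/index.yaml
opm render quay.io/example/my-operator-bundle:v0.1.0 --output=yaml >> my-catalog/index.yaml
# Validate the directory and generate a Dockerfile for it
opm validate my-catalog
opm generate dockerfile my-catalog
# Build and push the catalog image
podman build -t quay.io/example/my-catalog:latest -f my-catalog.Dockerfile .
podman push quay.io/example/my-catalog:latest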
32.2.4. Testing the Configuration | 32.2.4. Testing the Configuration Warning The commands below will cause the kernel to crash. Use caution when following these steps, and by no means use them on a production machine. To test the configuration, reboot the system with kdump enabled, and make sure that the service is running (see Section 12.3, "Running Services" for more information on how to run a service in Red Hat Enterprise Linux): Then type the following commands at a shell prompt: This will force the Linux kernel to crash, and the address-YYYY-MM-DD-HH:MM:SS/vmcore file (where address is the IP address of the crashed system and the timestamp is the time of the crash) will be copied to the location you have selected in the configuration (that is, to /var/crash/ by default). | [
"~]# service kdump status Kdump is operational",
"echo 1 > /proc/sys/kernel/sysrq echo c > /proc/sysrq-trigger"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kdump-configuration-testing |
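After the forced crash, the system reboots and the dump can be verified. A minimal check, assuming the default local target described above, is:

# After the reboot, confirm that a vmcore was written under /var/crash/
ls -l /var/crash/
# The dump directory is named after the address and timestamp, for example:
# 127.0.0.1-2024-01-01-12:00:00/vmcore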
7.10. Additional Resources | 7.10. Additional Resources For more information about the Audit system, see the following sources. Online Sources The Linux Audit system project page: http://people.redhat.com/sgrubb/audit/ . Article Investigating kernel Return Codes with the Linux Audit System in the Hack In the Box magazine: http://magazine.hackinthebox.org/issues/HITB-Ezine-Issue-005.pdf . Installed Documentation Documentation provided by the audit package can be found in the /usr/share/doc/audit- version / directory. Manual Pages audispd.conf (5) auditd.conf (5) ausearch-expression (5) audit.rules (7) audispd (8) auditctl (8) auditd (8) aulast (8) aulastlog (8) aureport (8) ausearch (8) ausyscall (8) autrace (8) auvirt (8) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-audit_additional_resources |
Chapter 79. KafkaAutoRebalanceStatusBrokers schema reference | Chapter 79. KafkaAutoRebalanceStatusBrokers schema reference Used in: KafkaAutoRebalanceStatus Property Property type Description mode string (one of [remove-brokers, add-brokers]) Mode for which there is an auto-rebalancing operation in progress or queued, when brokers are added or removed. The possible modes are add-brokers and remove-brokers . brokers integer array List of broker IDs involved in an auto-rebalancing operation related to the current mode. The list contains one of the following: Broker IDs for a current auto-rebalance. Broker IDs for a queued auto-rebalance (if an auto-rebalance is still in progress). | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaAutoRebalanceStatusBrokers-reference
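As an illustration of how the two properties fit together, the fragment below shows a brokers list with two entries; the broker IDs are invented for the example and the surrounding KafkaAutoRebalanceStatus structure is not reproduced here.

# Two KafkaAutoRebalanceStatusBrokers entries, shown standalone
- mode: add-brokers
  brokers: [3, 4]
- mode: remove-brokers
  brokers: [5]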
Monitoring Ceph with Datadog Guide | Monitoring Ceph with Datadog Guide Red Hat Ceph Storage 4 Guide on Monitoring Ceph with Datadog Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_datadog_guide/index |
Chapter 2. Using sos reports | Chapter 2. Using sos reports You can use the sos tool to collect troubleshooting information about a host. The sos report command generates a detailed report that shows all of the enabled plugins and data from the different components and applications in a system. 2.1. About sos reports The sos tool is composed of different plugins that help you gather information from different applications. A MicroShift-specific plugin has been added from sos version 4.5.1, and it can gather the following data: MicroShift configuration and version YAML output for cluster-wide and system namespaced resources OVN-Kubernetes information 2.2. Gathering data from an sos report Prerequisites You must have the sos package installed. Procedure Log into the failing host as a root user. Perform the debug report creation procedure by running the following command: USD microshift-sos-report Example output sosreport (version 4.5.1) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support representative. Any information provided to Red Hat will be treated in accordance with the published support policies at: Distribution Website : https://www.redhat.com/ Commercial Support : https://www.access.redhat.com/ The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party. No changes will be made to system configuration. Setting up archive ... Setting up plugins ... Running plugins. Please wait ... Starting 1/2 microshift [Running: microshift] Starting 2/2 microshift_ovn [Running: microshift microshift_ovn] Finishing plugins [Running: microshift] Finished running plugins Found 1 total reports to obfuscate, processing up to 4 concurrently sosreport-microshift-rhel9-2023-03-31-axjbyxw : Beginning obfuscation... sosreport-microshift-rhel9-2023-03-31-axjbyxw : Obfuscation completed Successfully obfuscated 1 report(s) Creating compressed archive... A mapping of obfuscated elements is available at /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map Your sosreport has been generated and saved in: /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz Size 444.14KiB Owner root sha256 922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269 Please send this file to your support representative. 2.3. Additional resources How to provide files to Red Hat Support (vmcore, rhev logcollector, sosreports, heap dumps, log files, etc. What is an sos report and how to create one in Red Hat Enterprise Linux (RHEL)? | [
"microshift-sos-report",
"sosreport (version 4.5.1) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support representative. Any information provided to Red Hat will be treated in accordance with the published support policies at: Distribution Website : https://www.redhat.com/ Commercial Support : https://www.access.redhat.com/ The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party. No changes will be made to system configuration. Setting up archive Setting up plugins Running plugins. Please wait Starting 1/2 microshift [Running: microshift] Starting 2/2 microshift_ovn [Running: microshift microshift_ovn] Finishing plugins [Running: microshift] Finished running plugins Found 1 total reports to obfuscate, processing up to 4 concurrently sosreport-microshift-rhel9-2023-03-31-axjbyxw : Beginning obfuscation sosreport-microshift-rhel9-2023-03-31-axjbyxw : Obfuscation completed Successfully obfuscated 1 report(s) Creating compressed archive A mapping of obfuscated elements is available at /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map Your sosreport has been generated and saved in: /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz Size 444.14KiB Owner root sha256 922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269 Please send this file to your support representative."
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/support/microshift-sos-report |
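Before sending the archive to a support representative, it can be useful to verify the checksum reported by the tool and review the contents locally, since the report may contain data your organization considers sensitive. The file name below is the one from the example output; yours will differ.

# Verify the checksum printed by sos
sha256sum /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz
# Unpack and inspect the contents before uploading
tar xf /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz -C /tmp
ls /tmp/    # locate the extracted sosreport directory and review its files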
Chapter 12. Example: Configuring OVS-DPDK and SR-IOV with VXLAN tunnelling | Chapter 12. Example: Configuring OVS-DPDK and SR-IOV with VXLAN tunnelling You can deploy Compute nodes with both OVS-DPDK and SR-IOV interfaces. The cluster includes ML2/OVS and VXLAN tunnelling. Important In your roles configuration file, for example roles_data.yaml , comment out or remove the line that contains OS::TripleO::Services::Tuned , when you generate the overcloud roles. ServicesDefault: # - OS::TripleO::Services::Tuned When you have commented out or removed OS::TripleO::Services::Tuned , you can set the TunedProfileName parameter to suit your requirements, for example "cpu-partitioning" . If you do not comment out or remove the line OS::TripleO::Services::Tuned and redeploy, the TunedProfileName parameter gets the default value of "throughput-performance" , instead of any other value that you set. 12.1. Configuring roles data Red Hat OpenStack Platform provides a set of default roles in the roles_data.yaml file. You can create your own roles_data.yaml file to support the roles you require. For the purposes of this example, the ComputeOvsDpdkSriov role is created. Additional resources Composable services and custom roles in the Advanced Overcloud Customization guide roles-data.yaml 12.2. Configuring OVS-DPDK parameters Note You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. For more details, see Deriving DPDK parameters with workflows . Add the custom resources for OVS-DPDK under resource_registry : resource_registry: # Specify the relative/absolute path to the config files you want to use for override the default. OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml Under parameter_defaults , set the tunnel type to vxlan , and the network type to vxlan,vlan : NeutronTunnelTypes: 'vxlan' NeutronNetworkType: 'vxlan,vlan' Under parameters_defaults , set the bridge mapping: # The OVS logical->physical bridge mappings to use. NeutronBridgeMappings: - dpdk-mgmt:br-link0 Under parameter_defaults , set the role-specific parameters for the ComputeOvsDpdkSriov role: ########################## # OVS DPDK configuration # ########################## ComputeOvsDpdkSriovParameters: KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39" TunedProfileName: "cpu-partitioning" IsolCpusList: "2-19,22-39" NovaComputeCpuDedicatedSet: ['4-19,24-39'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: "3072,1024" OvsDpdkMemoryChannels: "4" OvsPmdCoreList: "2,22,3,23" NovaComputeCpuSharedSet: [0,20,1,21] NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024 Note To prevent failures during guest creation, assign at least one CPU with sibling thread on each NUMA node. In the example, the values for the OvsPmdCoreList parameter denote cores 2 and 22 from NUMA 0, and cores 3 and 23 from NUMA 1. Note These huge pages are consumed by the virtual machines, and also by OVS-DPDK using the OvsDpdkSocketMemory parameter as shown in this procedure. The number of huge pages available for the virtual machines is the boot parameter minus the OvsDpdkSocketMemory . You must also add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. Note OvsDpdkMemoryChannels is a required setting for this procedure. 
For optimum operation, ensure you deploy DPDK with appropriate parameters and values. Configure the role-specific parameters for SR-IOV: NovaPCIPassthrough: - vendor_id: "8086" product_id: "1528" address: "0000:06:00.0" trusted: "true" physical_network: "sriov-1" - vendor_id: "8086" product_id: "1528" address: "0000:06:00.1" trusted: "true" physical_network: "sriov-2" 12.3. Configuring the controller node Create the control-plane Linux bond for an isolated network. - type: linux_bond name: bond_api bonding_options: "mode=active-backup" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic2 primary: true Assign VLANs to this Linux bond. - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet - type: vlan vlan_id: get_param: StorageMgmtNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageMgmtIpSubnet - type: vlan vlan_id: get_param: ExternalNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: ExternalIpSubnet routes: - default: true next_hop: get_param: ExternalInterfaceDefaultRoute Create the OVS bridge to access neutron-dhcp-agent and neutron-metadata-agent services. - type: ovs_bridge name: br-link0 use_dhcp: false mtu: 9000 members: - type: interface name: nic3 mtu: 9000 - type: vlan vlan_id: get_param: TenantNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: TenantIpSubnet 12.4. Configuring the Compute node for DPDK and SR-IOV Create the computeovsdpdksriov.yaml file from the default compute.yaml file, and make the following changes: Create the control-plane Linux bond for an isolated network. - type: linux_bond name: bond_api bonding_options: "mode=active-backup" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 Assign VLANs to this Linux bond. - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet Set a bridge with a DPDK port to link to the controller. - type: ovs_user_bridge name: br-link0 use_dhcp: false ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic7 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic8 Note To include multiple DPDK devices, repeat the type code section for each DPDK device that you want to add. Note When using OVS-DPDK, all bridges on the same Compute node must be of type ovs_user_bridge . Red Hat OpenStack Platform does not support both ovs_bridge and ovs_user_bridge located on the same node. 12.5. Deploying the overcloud Run the overcloud_deploy.sh script: | [
"ServicesDefault: - OS::TripleO::Services::Tuned",
"resource_registry: # Specify the relative/absolute path to the config files you want to use for override the default. OS::TripleO::ComputeOvsDpdkSriov::Net::SoftwareConfig: nic-configs/computeovsdpdksriov.yaml OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml",
"NeutronTunnelTypes: 'vxlan' NeutronNetworkType: 'vxlan,vlan'",
"The OVS logical->physical bridge mappings to use. NeutronBridgeMappings: - dpdk-mgmt:br-link0",
"########################## # OVS DPDK configuration # ########################## ComputeOvsDpdkSriovParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"2-19,22-39\" NovaComputeCpuDedicatedSet: ['4-19,24-39'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"3072,1024\" OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"2,22,3,23\" NovaComputeCpuSharedSet: [0,20,1,21] NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024",
"NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1528\" address: \"0000:06:00.0\" trusted: \"true\" physical_network: \"sriov-1\" - vendor_id: \"8086\" product_id: \"1528\" address: \"0000:06:00.1\" trusted: \"true\" physical_network: \"sriov-2\"",
"- type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic2 primary: true",
"- type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet - type: vlan vlan_id: get_param: StorageMgmtNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageMgmtIpSubnet - type: vlan vlan_id: get_param: ExternalNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: ExternalIpSubnet routes: - default: true next_hop: get_param: ExternalInterfaceDefaultRoute",
"- type: ovs_bridge name: br-link0 use_dhcp: false mtu: 9000 members: - type: interface name: nic3 mtu: 9000 - type: vlan vlan_id: get_param: TenantNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: TenantIpSubnet",
"- type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4",
"- type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet",
"- type: ovs_user_bridge name: br-link0 use_dhcp: false ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic7 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic8"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/example-config-dpdk-sriov-vxlan_rhosp-nfv |
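The deployment step at the end of the preceding chapter refers to an overcloud_deploy.sh script without showing its contents. The following is a minimal sketch of what such a wrapper might contain, not the definitive script for this example: the roles file name, the custom network-environment.yaml file, and the environment file paths are assumptions based on the files discussed in the chapter, and the list of environment files must match your own deployment.

#!/bin/bash
# Sketch of an overcloud_deploy.sh wrapper for the OVS-DPDK and SR-IOV example.
# All file names and paths below are assumptions; replace them with the files
# used in your own deployment.
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -r ~/templates/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml \
  -e ~/templates/network-environment.yaml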
Chapter 25. Automatic Bug Reporting Tool (ABRT) | Chapter 25. Automatic Bug Reporting Tool (ABRT) 25.1. Introduction to ABRT The Automatic Bug Reporting Tool , commonly abbreviated as ABRT , is a set of tools that is designed to help users detect and report application crashes. Its main purpose is to ease the process of reporting issues and finding solutions. In this context, the solution can be a Bugzilla ticket, a knowledge-base article, or a suggestion to update a package to a version containing a fix. ABRT consists of the abrtd daemon and a number of system services and utilities for processing, analyzing, and reporting detected problems. The daemon runs silently in the background most of the time and springs into action when an application crashes or a kernel oops is detected. The daemon then collects the relevant problem data, such as a core file if there is one, the crashing application's command line parameters, and other data of forensic utility. ABRT currently supports the detection of crashes in applications written in the C, C++, Java, Python, and Ruby programming languages, as well as X.Org crashes, kernel oopses, and kernel panics. See Section 25.4, "Detecting Software Problems" for more detailed information on the types of failures and crashes supported, and the way the various types of crashes are detected. The identified problems can be reported to a remote issue tracker, and the reporting can be configured to happen automatically whenever an issue is detected. Problem data can also be stored locally or on a dedicated system and reviewed, reported, and deleted manually by the user. The reporting tools can send problem data to a Bugzilla database or the Red Hat Technical Support (RHTSupport) website. The tools can also upload it using FTP or SCP , send it as an email, or write it to a file. The ABRT component that handles existing problem data (as opposed to, for example, the creation of new problem data) is a part of a separate project, libreport . The libreport library provides a generic mechanism for analyzing and reporting problems, and it is used by applications other than ABRT as well. However, ABRT and libreport operation and configuration are closely integrated. They are, therefore, discussed as one in this document. Note Note that an ABRT report is generated only when a core dump is generated. A core dump is generated only for some signals. For example, SIGKILL (-9) does not generate a core dump, so ABRT cannot catch this type of failure. For more information about signals and core dump generation, see man 7 signal. 25.2. Installing ABRT and Starting its Services In order to use ABRT , ensure that the abrt-desktop or the abrt-cli package is installed on your system. The abrt-desktop package provides a graphical user interface for ABRT , and the abrt-cli package contains a tool for using ABRT on the command line. You can also install both. The general workflow with both the ABRT GUI and the command line tool is procedurally similar and follows the same pattern. Warning Please note that installing the ABRT packages overwrites the /proc/sys/kernel/core_pattern file, which can contain a template used to name core-dump files. The content of this file will be overwritten to: See Section 9.2.4, "Installing Packages" for general information on how to install packages using the Yum package manager. 25.2.1. Installing the ABRT GUI The ABRT graphical user interface provides an easy-to-use front end for working in a desktop environment.
You can install the required package by running the following command as the root user: Upon installation, the ABRT notification applet is configured to start automatically when your graphical desktop session starts. You can verify that the ABRT applet is running by issuing the following command in a terminal: If the applet is not running, you can start it manually in your current desktop session by running the abrt-applet program: 25.2.2. Installing ABRT for the Command Line The command line interface is useful on headless machines, remote systems connected over a network, or in scripts. You can install the required package by running the following command as the root user: 25.2.3. Installing Supplementary ABRT Tools To receive email notifications about crashes detected by ABRT , you need to have the libreport-plugin-mailx package installed. You can install it by executing the following command as root : By default, it sends notifications to the root user at the local machine. The email destination can be configured in the /etc/libreport/plugins/mailx.conf file. To have notifications displayed in your console at login time, install the abrt-console-notification package as well. ABRT can detect, analyze, and report various types of software failures. By default, ABRT is installed with support for the most common types of failures, such as crashes of C and C++ applications. Support for other types of failures is provided by independent packages. For example, to install support for detecting exceptions in applications written using the Java language, run the following command as root : See Section 25.4, "Detecting Software Problems" for a list of languages and software projects which ABRT supports. The section also includes a list of all corresponding packages that enable the detection of the various types of failures. 25.2.4. Starting the ABRT Services The abrtd daemon requires the abrt user to exist for file system operations in the /var/spool/abrt directory. When the abrt package is installed, it automatically creates the abrt user whose UID and GID is 173, if such user does not already exist. Otherwise, the abrt user can be created manually. In that case, any UID and GID can be chosen, because abrtd does not require a specific UID and GID. The abrtd daemon is configured to start at boot time. You can use the following command to verify its current status: If systemctl returns inactive or unknown , the daemon is not running. You can start it for the current session by entering the following command as root : You can use the same commands to start or check status of related error-detection services. For example, make sure the abrt-ccpp service is running if you want ABRT to detect C or C++ crashes. See Section 25.4, "Detecting Software Problems" for a list of all available ABRT detection services and their respective packages. With the exception of the abrt-vmcore and abrt-pstoreoops services, which are only started when a kernel panic or kernel oops occurs, all ABRT services are automatically enabled and started at boot time when their respective packages are installed. You can disable or enable any ABRT service by using the systemctl utility as described in Chapter 10, Managing Services with systemd . 25.2.5. Testing ABRT Crash Detection To test that ABRT works properly, use the kill command to send the SEGV signal to terminate a process. 
For example, start a sleep process and terminate it with the kill command in the following way: ABRT detects a crash shortly after executing the kill command, and, provided a graphical session is running, the user is notified of the detected problem by the GUI notification applet. On the command line, you can check that the crash was detected by running the abrt-cli list command or by examining the crash dump created in the /var/spool/abrt/ directory. See Section 25.5, "Handling Detected Problems" for more information on how to work with detected crashes. 25.3. Configuring ABRT A problem life cycle is driven by events in ABRT . For example: Event #1 - a problem-data directory is created. Event #2 - problem data is analyzed. Event #3 - the problem is reported to Bugzilla. Whenever a problem is detected, ABRT compares it with all existing problem data and determines whether that same problem has already been recorded. If it has, the existing problem data is updated, and the most recent (duplicate) problem is not recorded again. If the problem is not recognized by ABRT , a problem-data directory is created. A problem-data directory typically consists of files such as: analyzer , architecture , coredump , cmdline , executable , kernel , os_release , reason , time , and uid . Other files, such as backtrace , can be created during the analysis of the problem, depending on which analyzer method is used and its configuration settings. Each of these files holds specific information about the system and the problem itself. For example, the kernel file records the version of a crashed kernel. After the problem-data directory is created and problem data gathered, you can process the problem using either the ABRT GUI, or the abrt-cli utility for the command line. See Section 25.5, "Handling Detected Problems" for more information about the ABRT tools provided for working with recorded problems. 25.3.1. Configuring Events ABRT events use plugins to carry out the actual reporting operations. Plugins are compact utilities that the events call to process the content of problem-data directories. Using plugins, ABRT is capable of reporting problems to various destinations, and almost every reporting destination requires some configuration. For instance, Bugzilla requires a user name, password, and a URL pointing to an instance of the Bugzilla service. Some configuration details can have default values (such as a Bugzilla URL), but others cannot have sensible defaults (for example, a user name). ABRT looks for these settings in configuration files, such as report_Bugzilla.conf , in the /etc/libreport/events/ or USDHOME/.cache/abrt/events/ directories for system-wide or user-specific settings respectively. The configuration files contain pairs of directives and values. These files are the bare minimum necessary for running events and processing the problem-data directories. The gnome-abrt and abrt-cli tools read the configuration data from these files and pass it to the events they run. Additional information about events (such as their description, names, types of parameters that can be passed to them as environment variables, and other properties) is stored in event_name .xml files in the /usr/share/libreport/events/ directory. These files are used by both gnome-abrt and abrt-cli to make the user interface more friendly. Do not edit these files unless you want to modify the standard installation. 
If you intend to do that, copy the file to be modified to the /etc/libreport/events/ directory and modify the new file. These files can contain the following information: a user-friendly event name and description (Bugzilla, Report to Bugzilla bug tracker), a list of items in a problem-data directory that are required for the event to succeed, a default and mandatory selection of items to send or not send, whether the GUI should prompt for data review, what configuration options exist, their types (string, Boolean, and so on), default value, prompt string, and so on; this lets the GUI build appropriate configuration dialogs. For example, the report_Logger event accepts an output filename as a parameter. Using the respective event_name .xml file, the ABRT GUI determines which parameters can be specified for a selected event and allows the user to set the values for these parameters. The values are saved by the ABRT GUI and reused on subsequent invocations of these events. Note that the ABRT GUI saves configuration options using the GNOME Keyring tool and by passing them to events, it overrides data from text configuration files. To open the graphical Configuration window, choose Automatic Bug Reporting Tool Preferences from within a running instance of the gnome-abrt application. This window shows a list of events that can be selected during the reporting process when using the GUI . When you select one of the configurable events, you can click the Configure button and modify the settings for that event. Figure 25.1. Configuring ABRT Events Important All files in the /etc/libreport/ directory hierarchy are world-readable and are meant to be used as global settings. Thus, it is not advisable to store user names, passwords, or any other sensitive data in them. The per-user settings (set in the GUI application and readable by the owner of USDHOME only) are safely stored in GNOME Keyring , or they can be stored in a text configuration file in USDHOME/.abrt/ for use with abrt-cli . The following table shows a selection of the default analyzing, collecting, and reporting events provided by the standard installation of ABRT . The table lists each event's name, identifier, configuration file from the /etc/libreport/events.d/ directory, and a brief description. Note that while the configuration files use the event identifiers, the ABRT GUI refers to the individual events using their names. Note also that not all of the events can be set up using the GUI . For information on how to define a custom event, see Section 25.3.2, "Creating Custom Events" . Table 25.1. Standard ABRT Events Name Identifier and Configuration File Description uReport report_uReport Uploads a uReport to the FAF server. Mailx report_Mailx mailx_event.conf Sends the problem report via the Mailx utility to a specified email address. Bugzilla report_Bugzilla bugzilla_event.conf Reports the problem to the specified installation of the Bugzilla bug tracker. Red Hat Customer Support report_RHTSupport rhtsupport_event.conf Reports the problem to the Red Hat Technical Support system. Analyze C or C++ Crash analyze_CCpp ccpp_event.conf Sends the core dump to a remote retrace server for analysis or performs a local analysis if the remote one fails. Report uploader report_Uploader uploader_event.conf Uploads a tarball ( .tar.gz ) archive with problem data to the chosen destination using the FTP or the SCP protocol.
Analyze VM core analyze_VMcore vmcore_event.conf Runs the GDB (the GNU debugger) on the problem data of a kernel oops and generates a backtrace of the kernel. Local GNU Debugger analyze_LocalGDB ccpp_event.conf Runs GDB (the GNU debugger) on the problem data of an application and generates a backtrace of the program. Collect .xsession-errors analyze_xsession_errors ccpp_event.conf Saves relevant lines from the ~/.xsession-errors file to the problem report. Logger report_Logger print_event.conf Creates a problem report and saves it to a specified local file. Kerneloops.org report_Kerneloops koops_event.conf Sends a kernel problem to the oops tracker at kerneloops.org. 25.3.2. Creating Custom Events Each event is defined by one rule structure in a respective configuration file. The configuration files are typically stored in the /etc/libreport/events.d/ directory. These configuration files are loaded by the main configuration file, /etc/libreport/report_event.conf . There is no need to edit the default configuration files because abrt will run the scripts contained in /etc/libreport/events.d/ . This file accepts shell metacharacters (for example, *, USD, ?) and interprets relative paths relatively to its location. Each rule starts with a line with a non-space leading character, and all subsequent lines starting with the space character or the tab character are considered a part of this rule. Each rule consists of two parts, a condition part and a program part. The condition part contains conditions in one of the following forms: VAR = VAL VAR != VAL VAL ~= REGEX where: VAR is either the EVENT key word or a name of a problem-data directory element (such as executable , package , hostname , and so on), VAL is either a name of an event or a problem-data element, and REGEX is a regular expression. The program part consists of program names and shell-interpretable code. If all conditions in the condition part are valid, the program part is run in the shell. The following is an event example: This event would overwrite the contents of the /tmp/dt file with the current date and time and print the host name of the machine and its kernel version on the standard output. Here is an example of a more complex event, which is actually one of the predefined events. It saves relevant lines from the ~/.xsession-errors file to the problem report of any problem for which the abrt-ccpp service has been used, provided the crashed application had any X11 libraries loaded at the time of the crash: The set of possible events is not definitive. System administrators can add events according to their need in the /etc/libreport/events.d/ directory. Currently, the following event names are provided with the standard ABRT and libreport installations: post-create This event is run by abrtd to process newly created problem-data directories. When the post-create event is run, abrtd checks whether the new problem data matches any of the already existing problem directories. If such a problem directory exists, it is updated and the new problem data is discarded. Note that if the script in any definition of the post-create event exits with a non-zero value, abrtd will terminate the process and will drop the problem data. notify , notify-dup The notify event is run following the completion of post-create . When the event is run, the user can be sure that the problem deserves their attention. The notify-dup is similar, except it is used for duplicate occurrences of the same problem. 
analyze_ name_suffix where name_suffix is the replaceable part of the event name. This event is used to process collected data. For example, the analyze_LocalGDB event uses the GNU Debugger ( GDB ) utility to process the core dump of an application and produce a backtrace of the crash. collect_ name_suffix where name_suffix is the adjustable part of the event name. This event is used to collect additional information on problems. report_ name_suffix where name_suffix is the adjustable part of the event name. This event is used to report a problem. 25.3.3. Setting Up Automatic Reporting ABRT can be configured to send initial anonymous reports, or uReports , of any detected issues or crashes automatically without any user interaction. When automatic reporting is turned on, the so-called uReport, which is normally sent at the beginning of the crash-reporting process, is sent immediately after a crash is detected. This prevents duplicate support cases based on identical crashes. To enable the autoreporting feature, issue the following command as root : The above command sets the AutoreportingEnabled directive in the /etc/abrt/abrt.conf configuration file to yes . This system-wide setting applies to all users of the system. Note that by enabling this option, automatic reporting will also be enabled in the graphical desktop environment. To only enable autoreporting in the ABRT GUI, switch the Automatically send uReport option to YES in the Problem Reporting Configuration window. To open this window, choose Automatic Bug Reporting Tool ABRT Configuration from within a running instance of the gnome-abrt application. To launch the application, go to Applications Sundry Automatic Bug Reporting Tool . Figure 25.2. Configuring ABRT Problem Reporting Upon detection of a crash, by default, ABRT submits a uReport with basic information about the problem to Red Hat's ABRT server. The server determines whether the problem is known and either provides a short description of the problem along with a URL of the reported case if known, or invites the user to report it if not known. Note A uReport (microreport) is a JSON object representing a problem, such as a binary crash or a kernel oops. These reports are designed to be brief, machine readable, and completely anonymous, which is why they can be used for automated reporting. The uReports make it possible to keep track of bug occurrences, but they usually do not provide enough information for engineers to fix the bug. A full bug report is needed for a support case to be opened. To change the default behavior of the autoreporting facility from sending a uReport, modify the value of the AutoreportingEvent directive in the /etc/abrt/abrt.conf configuration file to point to a different ABRT event. See Table 25.1, "Standard ABRT Events" for an overview of the standard events. 25.4. Detecting Software Problems ABRT is capable of detecting, analyzing, and processing crashes in applications written in a variety of different programming languages. Many of the packages that contain the support for detecting the various types of crashes are installed automatically when either one of the main ABRT packages ( abrt-desktop , abrt-cli ) is installed. See Section 25.2, "Installing ABRT and Starting its Services" for instructions on how to install ABRT . See the table below for a list of the supported types of crashes and the respective packages. Table 25.2.
Supported Programming Languages and Software Projects Language/Project Package C or C++ abrt-addon-ccpp Python abrt-addon-python Ruby rubygem-abrt Java abrt-java-connector X.Org abrt-addon-xorg Linux (kernel oops) abrt-addon-kerneloops Linux (kernel panic) abrt-addon-vmcore Linux (persistent storage) abrt-addon-pstoreoops 25.4.1. Detecting C and C++ Crashes The abrt-ccpp service installs its own core-dump handler, which, when started, overrides the default value of the kernel's core_pattern variable, so that C and C++ crashes are handled by abrtd . If you stop the abrt-ccpp service, the previously specified value of core_pattern is reinstated. By default, the /proc/sys/kernel/core_pattern file contains the string core , which means that the kernel produces files with the core. prefix in the current directory of the crashed process. The abrt-ccpp service overwrites the core_pattern file to contain the following command: This command instructs the kernel to pipe the core dump to the abrt-hook-ccpp program, which stores it in ABRT 's dump location and notifies the abrtd daemon of the new crash. It also stores the following files from the /proc/ PID / directory (where PID is the ID of the crashed process) for debugging purposes: maps , limits , cgroup , status . See proc (5) for a description of the format and the meaning of these files. 25.4.2. Detecting Python Exceptions The abrt-addon-python package installs a custom exception handler for Python applications. The Python interpreter then automatically imports the abrt.pth file installed in /usr/lib64/python2.7/site-packages/ , which in turn imports abrt_exception_handler.py . This overrides Python's default sys.excepthook with a custom handler, which forwards unhandled exceptions to abrtd via its Socket API. To disable the automatic import of site-specific modules, and thus prevent the ABRT custom exception handler from being used when running a Python application, pass the -S option to the Python interpreter: In the above command, replace file.py with the name of the Python script you want to execute without the use of site-specific modules. 25.4.3. Detecting Ruby Exceptions The rubygem-abrt package registers a custom handler using the at_exit feature, which is executed when a program ends. This allows checking for possible unhandled exceptions. Every time an unhandled exception is captured, the ABRT handler prepares a bug report, which can be submitted to Red Hat Bugzilla using standard ABRT tools. 25.4.4. Detecting Java Exceptions The ABRT Java Connector is a JVM agent that reports uncaught Java exceptions to abrtd . The agent registers several JVMTI event callbacks and has to be loaded into the JVM using the -agentlib command line parameter. Note that the processing of the registered callbacks negatively impacts the performance of the application. Use the following command to have ABRT catch exceptions from a Java class: In the above command, replace USDMyClass with the name of the Java class you want to test. By passing the abrt=on option to the connector, you ensure that the exceptions are handled by abrtd . In case you want to have the connector output the exceptions to standard output, omit this option. 25.4.5. Detecting X.Org Crashes The abrt-xorg service collects and processes information about crashes of the X.Org server from the /var/log/Xorg.0.log file. Note that no report is generated if a blacklisted X.org module is loaded.
Instead, a not-reportable file is created in the problem-data directory with an appropriate explanation. You can find the list of offending modules in the /etc/abrt/plugins/xorg.conf file. Only proprietary graphics-driver modules are blacklisted by default. 25.4.6. Detecting Kernel Oopses and Panics By checking the output of kernel logs, ABRT is able to catch and process the so-called kernel oopses - non-fatal deviations from the correct behavior of the Linux kernel. This functionality is provided by the abrt-oops service. ABRT can also detect and process kernel panics - fatal, non-recoverable errors that require a reboot, using the abrt-vmcore service. The service only starts when a vmcore file (a kernel-core dump) appears in the /var/crash/ directory. When a core-dump file is found, abrt-vmcore creates a new problem-data directory in the /var/spool/abrt/ directory and copies the core-dump file to the newly created problem-data directory. After the /var/crash/ directory is searched, the service is stopped. For ABRT to be able to detect a kernel panic, the kdump service must be enabled on the system. The amount of memory that is reserved for the kdump kernel has to be set correctly. You can set it using the system-config-kdump graphical tool or by specifying the crashkernel parameter in the list of kernel options in the GRUB 2 menu. For details on how to enable and configure kdump , see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide . For information on making changes to the GRUB 2 menu see Chapter 26, Working with GRUB 2 . Using the abrt-pstoreoops service, ABRT is capable of collecting and processing information about kernel panics, which, on systems that support pstore , is stored in the automatically-mounted /sys/fs/pstore/ directory. The platform-dependent pstore interface (persistent storage) provides a mechanism for storing data across system reboots, thus allowing for preserving kernel panic information. The service starts automatically when kernel crash-dump files appear in the /sys/fs/pstore/ directory. 25.5. Handling Detected Problems Problem data saved by abrtd can be viewed, reported, and deleted using either the command line tool, abrt-cli , or the graphical tool, gnome-abrt . Note Note that ABRT identifies duplicate problems by comparing new problems with all locally saved problems. For a repeating crash, ABRT requires you to act upon it only once. However, if you delete the crash dump of that problem, the next time this specific problem occurs, ABRT will treat it as a new crash: ABRT will alert you about it, prompt you to fill in a description, and report it. To avoid having ABRT notify you about a recurring problem, do not delete its problem data. 25.5.1. Using the Command Line Tool In the command line environment, the user is notified of new crashes on login, provided they have the abrt-console-notification package installed. The console notification looks like the following: To view detected problems, enter the abrt-cli list command: Each crash listed in the output of the abrt-cli list command has a unique identifier and a directory that can be used for further manipulation using abrt-cli . To view information about just one particular problem, use the abrt-cli info command: To increase the amount of information displayed when using both the list and info sub-commands, pass them the -d ( --detailed ) option, which shows all stored information about the problems listed, including respective backtrace files if they have already been generated.
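As a small illustration of combining these options, the following sketch lists only the problems recorded during the last day, with full details. It is an assumed usage pattern built from the --since timestamp shown in the console notification above and the -d option; it is not part of the original procedure.

# List problems detected in the last 24 hours, including backtraces and other details.
# The epoch timestamp is computed with date(1); adjust the time window as needed.
abrt-cli list -d --since "$(date -d '24 hours ago' +%s)"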
To analyze and report a certain problem, use the abrt-cli report command: Upon invocation of the above command, you will be asked to provide your credentials for opening a support case with Red Hat Customer Support. Next, abrt-cli opens a text editor with the content of the report. You can see what is being reported, and you can fill in instructions on how to reproduce the crash and other comments. You should also check the backtrace because the backtrace might be sent to a public server and viewed by anyone, depending on the problem-reporter event settings. Note You can choose which text editor is used to check the reports. abrt-cli uses the editor defined in the ABRT_EDITOR environment variable. If the variable is not defined, it checks the VISUAL and EDITOR variables. If none of these variables is set, the vi editor is used. You can set the preferred editor in your .bashrc configuration file. For example, if you prefer GNU Emacs , add the following line to the file: When you are done with the report, save your changes and close the editor. If you have reported your problem to the Red Hat Customer Support database, a problem case is filed in the database. From now on, you will be informed about the problem-resolution progress via the email address you provided during the process of reporting. You can also monitor the problem case using the URL that is provided to you when the problem case is created or via emails received from Red Hat Support. If you are certain that you do not want to report a particular problem, you can delete it. To delete a problem, so that ABRT does not keep information about it, use the command: To display help about a particular abrt-cli command, use the --help option: 25.5.2. Using the GUI The ABRT daemon broadcasts a D-Bus message whenever a problem report is created. If the ABRT applet is running in a graphical desktop environment, it catches this message and displays a notification dialog on the desktop. You can open the ABRT GUI using this dialog by clicking on the Report button. You can also open the ABRT GUI by selecting the Applications Sundry Automatic Bug Reporting Tool menu item. Alternatively, you can run the ABRT GUI from the command line as follows: The ABRT GUI window displays a list of detected problems. Each problem entry consists of the name of the failing application, the reason why the application crashed, and the date of the last occurrence of the problem. Figure 25.3. ABRT GUI To access a detailed problem description, double-click on a problem-report line or click on the Report button while the respective problem line is selected. You can then follow the instructions to proceed with the process of describing the problem, determining how it should be analyzed, and where it should be reported. To discard a problem, click on the Delete button. 25.6. Additional Resources For more information about ABRT and related topics, see the resources listed below. Installed Documentation abrtd (8) - The manual page for the abrtd daemon provides information about options that can be used with the daemon. abrt_event.conf (5) - The manual page for the abrt_event.conf configuration file describes the format of its directives and rules and provides reference information about event meta-data configuration in XML files. Online Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on this system.
Red Hat Enterprise Linux 7 Kernel Crash Dump Guide - The Kernel Crash Dump Guide for Red Hat Enterprise Linux 7 documents how to configure, test, and use the kdump crash recovery service and provides a brief overview of how to analyze the resulting core dump using the crash debugging utility. See Also Chapter 23, Viewing and Managing Log Files describes the configuration of the rsyslog daemon and the systemd journal and explains how to locate, view, and monitor system logs. Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line. Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands. | [
"|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e",
"~]# yum install abrt-desktop",
"~]USD ps -el | grep abrt-applet 0 S 500 2036 1824 0 80 0 - 61604 poll_s ? 00:00:00 abrt-applet",
"~]USD abrt-applet & [1] 2261",
"~]# yum install abrt-cli",
"~]# yum install libreport-plugin-mailx",
"~]# yum install abrt-java-connector",
"~]USD systemctl is-active abrtd.service active",
"~]# systemctl start abrtd.service",
"~]USD sleep 100 & [1] 2823 ~]USD kill -s SIGSEGV 2823",
"EVENT=post-create date > /tmp/dt echo USDHOSTNAME uname -r",
"EVENT=analyze_xsession_errors analyzer=CCpp dso_list~=. /libX11. test -f ~/.xsession-errors || { echo \"No ~/.xsession-errors\"; exit 1; } test -r ~/.xsession-errors || { echo \"Can't read ~/.xsession-errors\"; exit 1; } executable= cat executable && base_executable=USD{executable##*/} && grep -F -e \"USDbase_executable\" ~/.xsession-errors | tail -999 >xsession_errors && echo \"Element 'xsession_errors' saved\"",
"{blank}",
"{blank}",
"~]# abrt-auto-reporting enabled",
"|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e",
"~]USD python -S file.py",
"~]USD java -agentlib:abrt-java-connector=abrt=on USDMyClass -platform.jvmtiSupported true",
"ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1398783164",
"~]USD abrt-cli list id 6734c6f1a1ed169500a7bfc8bd62aabaf039f9aa Directory: /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430 count: 1 executable: /usr/bin/sleep package: coreutils-8.22-11.el7 time: Mon 21 Apr 2014 09:47:51 AM EDT uid: 1000 Run 'abrt-cli report /var/tmp/abrt/ccpp-2014-04-21-09:47:51-3430' for creating a case in Red Hat Customer Portal",
"abrt-cli info -d directory_or_id",
"abrt-cli report directory_or_id",
"export VISUAL = emacs",
"abrt-cli rm directory_or_id",
"abrt-cli command --help",
"~]USD gnome-abrt &"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-abrt |
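To make the custom event syntax from Section 25.3.2 of the preceding chapter more concrete, here is a minimal sketch of an additional rule that appends a one-line summary of every newly detected problem to a local log file. The file name, the log path, and the fields read from the problem-data directory are assumptions chosen for the example; adjust them as needed and place the rule in its own file under /etc/libreport/events.d/ .

# /etc/libreport/events.d/local_summary_event.conf (hypothetical file name)
# Append a short summary line for each problem that reaches the notify event.
# Following the predefined xsession example above, the program part is assumed to run
# inside the problem-data directory, so elements such as 'executable' and 'reason'
# can be read directly.
EVENT=notify
        echo "$(date) host=$HOSTNAME exe=$(cat executable) reason=$(cat reason)" >> /var/log/abrt-summary.log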
Chapter 13. Open Container Initiative support and Red Hat Quay | Chapter 13. Open Container Initiative support and Red Hat Quay Container registries such as Red Hat Quay were originally designed to support container images in the Docker image format. To promote the use of additional runtimes apart from Docker, the Open Container Initiative (OCI) was created to provide a standardization surrounding container runtimes and image formats. Most container registries support the OCI standardization as it is based on the Docker image manifest V2, Schema 2 format. In addition to container images, a variety of artifacts have emerged that support not just individual applications, but also the Kubernetes platform as a whole. These range from Open Policy Agent (OPA) policies for security and governance to Helm charts and Operators that aid in application deployment. Red Hat Quay is a private container registry that not only stores container images, but supports an entire ecosystem of tooling to aid in the management of containers. Prior to version 3.6, Red Hat Quay only supported Helm, which is considered to be the de facto package manager for Kubernetes. Helm simplifies how applications are packaged and deployed. Helm uses a packaging format called Charts which contain the Kubernetes resources representing an application. Charts can be made available for general distribution and consumption in repositories. A Helm repository is an HTTP server that serves an index.yaml metadata file and, optionally, a set of packaged charts. Beginning with Helm version 3, support was made available for distributing charts in OCI registries as an alternative to a traditional repository. As an enhancement to Helm support, Red Hat Quay introduced support for OCI-based artifacts from version 3.6 to include support for cosign, the ZStandard compression scheme, and other OCI media types. Support for Helm and other OCI artifacts is now enabled by default under the FEATURE_GENERAL_OCI_SUPPORT configuration field, and can be expanded to other artifact types using the ALLOWED_OCI_ARTIFACT_TYPES and IGNORE_UNKNOWN_MEDIATYPES fields. Because of the addition of FEATURE_GENERAL_OCI_SUPPORT , ALLOWED_OCI_ARTIFACT_TYPES , and IGNORE_UNKNOWN_MEDIATYPES , the FEATURE_HELM_OCI_SUPPORT configuration field has been deprecated. This configuration field is no longer supported and will be removed in a future version of Red Hat Quay. 13.1. Helm and OCI prerequisites Prior to enabling Helm and other Open Container Initiative (OCI) artifact types, you must meet the following prerequisites. 13.1.1. Installing Helm Use the following procedure to install the Helm client. Procedure Download the latest version of Helm from the Helm releases page. Enter the following command to unpack the Helm binary: USD tar -zxvf helm-v3.8.2-linux-amd64.tar.gz Move the Helm binary to the desired location: USD mv linux-amd64/helm /usr/local/bin/helm For more information about installing Helm, see the Installing Helm documentation. 13.1.2. Upgrading to Helm 3.8 Support for OCI registry charts requires that Helm has been upgraded to at least 3.8. If you have already downloaded Helm and need to upgrade to Helm 3.8, see the Helm Upgrade documentation. 13.1.3. Enabling your system to trust SSL/TLS certificates used by Red Hat Quay Communication between the Helm client and Red Hat Quay is facilitated over HTTPS. As of Helm 3.5, support is only available for registries communicating over HTTPS with trusted certificates.
In addition, the operating system must trust the certificates exposed by the registry. You must ensure that your operating system has been configured to trust the certificates used by Red Hat Quay. Use the following procedure to enable your system to trust the custom certificates. Procedure Enter the following command to copy the rootCA.pem file to the /etc/pki/ca-trust/source/anchors/ folder: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the CA trust store: USD sudo update-ca-trust extract 13.1.4. Creating an organization for Helm in Red Hat Quay It is recommended that you create a new organization for storing Helm charts in Red Hat Quay after you have downloaded the Helm client. Use the following procedure to create a new organization using the Red Hat Quay UI. Procedure Log in to your Red Hat Quay deployment. Click Create New Organization . Enter a name for the organization, for example, helm . Then, click Create Organization . 13.2. Using Helm charts with Red Hat Quay Use the following example to download and push an etherpad chart from the Red Hat Community of Practice (CoP) repository. Procedure As a Red Hat Quay administrator, enable support for Helm by setting FEATURE_GENERAL_OCI_SUPPORT to true in your config.yaml file: FEATURE_GENERAL_OCI_SUPPORT: true Add a chart repository: USD helm repo add redhat-cop https://redhat-cop.github.io/helm-charts Update the information of available charts locally from the chart repository: USD helm repo update Download a chart from a repository: USD helm pull redhat-cop/etherpad --version=0.0.4 --untar Package the chart into a chart archive: USD helm package ./etherpad Example output Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz Log in to your Quay repository using helm registry login : USD helm registry login quay370.apps.quayperf370.perfscale.devcluster.openshift.com Push the chart to your Quay repository using the helm push command: USD helm push etherpad-0.0.4.tgz oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com Example output: Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b Ensure that the push worked by deleting the local copy, and then pulling the chart from the repository: USD rm -rf etherpad-0.0.4.tgz USD helm pull oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad --version 0.0.4 Example output: Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902 13.3. Cosign OCI support with Red Hat Quay Cosign is a tool that can be used to sign and verify container images. It uses the ECDSA-P256 signature algorithm and Red Hat's Simple Signing payload format to create public keys that are stored in PKIX files. Private keys are stored as encrypted PEM files. Cosign currently supports the following: Hardware and KMS Signing Bring-your-own PKI OIDC PKI Built-in binary transparency and timestamping service 13.4. Installing and using Cosign for Red Hat Quay Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later. You have set FEATURE_GENERAL_OCI_SUPPORT to true in your config.yaml file.
Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a keypair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign a container image with the key pair by entering the following command: USD cosign sign -key cosign.key quay-server.example.com/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~/.docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay-server.example.com Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } } 13.5. Using other artifact types with Red Hat Quay Other artifact types that are not supported by default can be added to your Red Hat Quay deployment by using the ALLOWED_OCI_ARTIFACT_TYPES configuration field. Use the following procedure to add additional OCI media types. Prerequisites You have set FEATURE_GENERAL_OCI_SUPPORT to true in your config.yaml file. Procedure In your config.yaml file, add the ALLOWED_OCI_ARTIFACT_TYPES configuration field. For example: FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4> Add support for your desired artifact type, for example, Singularity Image Format (SIF), by adding the following to your config.yaml file: ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar Important When adding artifact types that are not configured by default, Red Hat Quay administrators will also need to manually add support for Cosign and Helm if desired. Now, users can tag SIF images for their Red Hat Quay registry. 13.6. Disabling OCI artifacts in Red Hat Quay Use the following procedure to disable support for OCI artifacts. Procedure Disable OCI artifact support by setting FEATURE_GENERAL_OCI_SUPPORT to false in your config.yaml file. For example: FEATURE_GENERAL_OCI_SUPPORT = false | [
"tar -zxvf helm-v3.8.2-linux-amd64.tar.gz",
"mv linux-amd64/helm /usr/local/bin/helm",
"sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"FEATURE_GENERAL_OCI_SUPPORT: true",
"helm repo add redhat-cop https://redhat-cop.github.io/helm-charts",
"helm repo update",
"helm pull redhat-cop/etherpad --version=0.0.4 --untar",
"helm package ./etherpad",
"Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz",
"helm registry login quay370.apps.quayperf370.perfscale.devcluster.openshift.com",
"helm push etherpad-0.0.4.tgz oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com",
"Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b",
"rm -rf etherpad-0.0.4.tgz",
"helm pull oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad --version 0.0.4",
"Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay-server.example.com/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay-server.example.com",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }",
"FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4>",
"ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar",
"FEATURE_GENERAL_OCI_SUPPORT = false"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/oci-intro |
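The signing procedure in the preceding chapter does not show the corresponding verification step. Assuming the key pair generated earlier in that chapter and the same example image reference, a verification might look like the following sketch:

# Verify the signature pushed for the example image, using the public key
# produced earlier by 'cosign generate-key-pair'.
cosign verify -key cosign.pub quay-server.example.com/user1/busybox:test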
Chapter 3. Ansible vault | Chapter 3. Ansible vault Sometimes your playbook needs to use sensitive data such as passwords, API keys, and other secrets to configure managed hosts. Storing this information in plain text in variables or other Ansible-compatible files is a security risk because any user with access to those files can read the sensitive data. With Ansible vault, you can encrypt, decrypt, view, and edit sensitive information. Encrypted content can be included as: Inserted variable files in an Ansible Playbook Host and group variables Variable files passed as arguments when executing the playbook Variables defined in Ansible roles You can use Ansible vault to securely manage individual variables, entire files, or even structured data like YAML files. This data can then be safely stored in a version control system or shared with team members without exposing sensitive information. Important Files are protected with symmetric encryption of the Advanced Encryption Standard (AES256), where a single password or passphrase is used both to encrypt and decrypt the data. Note that the way this is done has not been formally audited by a third party. To simplify management, it makes sense to set up your Ansible project so that sensitive variables and all other variables are kept in separate files or directories. Then you can protect the files containing sensitive variables with the ansible-vault command. Creating an encrypted file The following command prompts you for a new vault password. Then it opens a file for storing sensitive variables using the default editor. Viewing an encrypted file The following command prompts you for your existing vault password. Then it displays the sensitive contents of an already encrypted file. Editing an encrypted file The following command prompts you for your existing vault password. Then it opens the already encrypted file for you to update the sensitive variables using the default editor. Encrypting an existing file The following command prompts you for a new vault password. Then it encrypts an existing unencrypted file. Decrypting an existing file The following command prompts you for your existing vault password. Then it decrypts an existing encrypted file. Changing the password of an encrypted file The following command prompts you for your original vault password, then for the new vault password. Basic application of Ansible vault variables in a playbook --- - name: Create user accounts for all servers hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create user from vault.yml file user: name: "{{ username }}" password: "{{ pwhash }}" You read in the file with variables ( vault.yml ) in the vars_files section of your Ansible Playbook, and you use the curly brackets the same way you would with your ordinary variables. Then you either run the playbook with the ansible-playbook --ask-vault-pass command and enter the password manually, or you save the password in a separate file and run the playbook with the ansible-playbook --vault-password-file /path/to/my/vault-password-file command. Additional resources ansible-vault(1) , ansible-playbook(1) man pages on your system Ansible vault Ansible vault Best Practices | [
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"ansible-vault view vault.yml Vault password: <vault_password> my_secret: \"yJJvPqhsiusmmPPZdnjndkdnYNDjdj782meUZcw\"",
"ansible-vault edit vault.yml Vault password: <vault_password>",
"ansible-vault encrypt vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password> Encryption successful",
"ansible-vault decrypt vault.yml Vault password: <vault_password> Decryption successful",
"ansible-vault rekey vault.yml Vault password: <vault_password> New Vault password: <vault_password> Confirm New Vault password: <vault_password> Rekey successful",
"--- - name: Create user accounts for all servers hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create user from vault.yml file user: name: \"{{ username }}\" password: \"{{ pwhash }}\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/ansible-vault_automating-system-administration-by-using-rhel-system-roles |
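To make the --vault-password-file option mentioned at the end of the preceding chapter more concrete, the following sketch stores the vault password in a file that only your user can read and then runs the playbook non-interactively. The file path and playbook name are placeholders chosen for the example:

# Store the vault password in a protected file (readable only by the owner).
echo 'my_vault_password' > ~/.vault_pass
chmod 600 ~/.vault_pass

# Run the playbook without an interactive vault password prompt.
ansible-playbook --vault-password-file ~/.vault_pass playbook.yml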
Schedule and quota APIs | Schedule and quota APIs OpenShift Container Platform 4.18 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/schedule_and_quota_apis/index |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.9/proc-providing-feedback-on-redhat-documentation |
Chapter 12. Creating a Ceph key for external access | Chapter 12. Creating a Ceph key for external access Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. External access to Ceph storage is access to Ceph from any site that is not local. Ceph storage at the central location is external for edge (DCN) sites, just as Ceph storage at the edge is external for the central location. When you deploy the central or DCN sites with Ceph storage, you have the option of using the default openstack keyring for both local and external access. Alternatively, you can create a separate key for access by non-local sites. If you decide to use additional Ceph keys for access to your external sites, each key must have the same name. The key name is external in the examples that follow. If you use a separate key for access by non-local sites, you have the additional security benefit of being able to revoke and re-issue the external key in response to a security event without interrupting local access. However, using a separate key for external access will result in the loss of access to some features, such as cross availability zone backups and offline volume migration. You must balance the needs of your security posture against the desired feature set. By default, the keys for the central and all DCN sites will be shared. 12.1. Creating a Ceph key for external access Complete the following steps to create an external key for non-local access. Procedure Create a Ceph key for external access. This key is sensitive. You can generate the key using the following: In the directory of the stack you are deploying, create a ceph_keys.yaml environment file with contents like the following, using the output from the previous command for the key: Include the ceph_keys.yaml environment file in the deployment of the site. For example, to deploy the central site with the ceph_keys.yaml environment file, run a command like the following: 12.2. Using external Ceph keys You can only use keys that have already been deployed. For information on deploying a site with an external key, see Section 12.1, "Creating a Ceph key for external access" . This should be done for both central and edge sites. When you deploy an edge site that will use an external key provided by central, complete the following: Create a dcn_ceph_external.yaml environment file for the edge site. You must include the cephx-key-client-name option to specify the deployed key to include. Include the dcn_ceph_external.yaml file so that the edge site can access the Ceph cluster at the central site. Include the ceph_keys.yaml file to deploy an external key for the Ceph cluster at the edge site. When you update the central location after deploying your edge sites, ensure that the central location uses the DCN external keys: Ensure that the CephClientUserName parameter matches the key being exported. If you are using the name external as shown in these examples, create glance_update.yaml to be similar to the following: Use the openstack overcloud export ceph command to include the external keys for DCN edge access from the central location. To do this you must provide a comma-delimited list of stacks for the --stack argument, and include the cephx-key-client-name option: Redeploy the central site using the original templates and include the newly created dcn_ceph_external.yaml and glance_update.yaml files. | [
"python3 -c 'import os,struct,time,base64; key = os.urandom(16) ; header = struct.pack(\"<hiih\", 1, int(time.time()), 0, len(key)) ; print(base64.b64encode(header + key).decode())'",
"parameter_defaults: CephExtraKeys: - name: \"client.external\" caps: mgr: \"allow *\" mon: \"profile rbd\" osd: \"profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images\" key: \"AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q==\" mode: \"0600\"",
"overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ .... -e ~/central/ceph_keys.yaml",
"sudo -E openstack overcloud export ceph --stack central --cephx-key-client-name external --output-file ~/dcn-common/dcn_ceph_external.yaml",
"parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClusterName: central GlanceBackendID: central GlanceMultistoreConfig: dcn0: GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' CephClientUserName: 'external' CephClusterName: dcn0 GlanceBackendID: dcn0 dcn1: GlanceBackend: rbd GlanceStoreDescription: 'dcn1 rbd glance store' CephClientUserName: 'external' CephClusterName: dcn1 GlanceBackendID: dcn1",
"sudo -E openstack overcloud export ceph --stack dcn0,dcn1,dcn2 --cephx-key-client-name external --output-file ~/central/dcn_ceph_external.yaml",
"openstack overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/central/central_roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/central/central-images-env.yaml -e ~/central/role-counts.yaml -e ~/central/site-name.yaml -e ~/central/ceph.yaml -e ~/central/ceph_keys.yaml -e ~/central/glance.yaml -e ~/central/dcn_ceph_external.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/external-option |
7.161. pki-core | 7.161. pki-core 7.161.1. RHSA-2015:1347 - Moderate: pki-core security and bug fix update Updated pki-core packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link in the References section. Red Hat Certificate System is an enterprise software system designed to manage enterprise public key infrastructure (PKI) deployments. PKI Core contains fundamental packages required by Red Hat Certificate System, which comprise the Certificate Authority (CA) subsystem. Security Fix CVE-2012-2662 Multiple cross-site scripting flaws were discovered in the Red Hat Certificate System Agent and End Entity pages. An attacker could use these flaws to perform a cross-site scripting (XSS) attack against victims using the Certificate System's web interface. Bug Fixes BZ# 1171848 Previously, pki-core required the SSL version 3 (SSLv3) protocol ranges to communicate with the 389-ds-base packages. However, recent changes to 389-ds-base disabled the default use of SSLv3 and enforced using protocol ranges supported by secure protocols, such as the TLS protocol. As a consequence, the CA failed to install during an Identity Management (IdM) server installation. This update adds TLS-related parameters to the server.xml file of the CA to fix this problem, and running the ipa-server-install command now installs the CA as expected. BZ# 1212557 Previously, the ipa-server-install script failed when attempting to configure a stand-alone CA on systems with OpenJDK version 1.8.0 installed. The pki-core build and runtime dependencies have been modified to use OpenJDK version 1.7.0 during the stand-alone CA configuration. As a result, ipa-server-install no longer fails in this situation. BZ# 1225589 Creating a Red Hat Enterprise Linux 7 replica from a Red Hat Enterprise Linux 6 replica running the CA service sometimes failed in IdM deployments where the initial Red Hat Enterprise Linux 6 CA master had been removed. This could cause problems in some situations, such as when migrating from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. The bug occurred due to a problem in a version of IdM where the subsystem user, created during the initial CA server installation, was removed together with the initial master. This update adds the restore-subsystem-user.py script that restores the subsystem user in the described situation, thus enabling administrators to create a Red Hat Enterprise Linux 7 replica in this scenario. BZ# 1144188 Several Java import statements specify wildcard arguments. However, due to the use of wildcard arguments in the import statements of the source code contained in the Red Hat Enterprise Linux 6 maintenance branch, a name space collision created the potential for an incorrect class to be utilized. As a consequence, the Token Processing System (TPS) rebuild test failed with an error message. This update addresses the bug by supplying the fully named class in all of the affected areas, and the TPS rebuild test no longer fails. BZ# 1144608 Previously, pki-core failed to build with the rebased version of the CMake build system during the TPS rebuild test. The pki-core build files have been updated to comply with the rebased version of CMake. As a result, pki-core builds successfully in the described scenario. 
Users of pki-core are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-pki-core |
C.3. Inheritance, the <resources> Block, and Reusing Resources | C.3. Inheritance, the <resources> Block, and Reusing Resources Some resources benefit by inheriting values from a parent resource; that is commonly the case in an NFS service. Example C.5, "NFS Service Set Up for Resource Reuse and Inheritance" shows a typical NFS service configuration, set up for resource reuse and inheritance. Example C.5. NFS Service Set Up for Resource Reuse and Inheritance If the service were flat (that is, with no parent/child relationships), it would need to be configured as follows: The service would need four nfsclient resources - one per file system (a total of two for file systems), and one per target machine (a total of two for target machines). The service would need to specify export path and file system ID to each nfsclient, which introduces chances for errors in the configuration. In Example C.5, "NFS Service Set Up for Resource Reuse and Inheritance" however, the NFS client resources nfsclient:bob and nfsclient:jim are defined once; likewise, the NFS export resource nfsexport:exports is defined once. All the attributes needed by the resources are inherited from parent resources. Because the inherited attributes are dynamic (and do not conflict with one another), it is possible to reuse those resources - which is why they are defined in the resources block. It may not be practical to configure some resources in multiple places. For example, configuring a file system resource in multiple places can result in mounting one file system on two nodes, therefore causing problems. | [
"<resources> <nfsclient name=\"bob\" target=\"bob.example.com\" options=\"rw,no_root_squash\"/> <nfsclient name=\"jim\" target=\"jim.example.com\" options=\"rw,no_root_squash\"/> <nfsexport name=\"exports\"/> </resources> <service name=\"foo\"> <fs name=\"1\" mountpoint=\"/mnt/foo\" device=\"/dev/sdb1\" fsid=\"12344\"> <nfsexport ref=\"exports\"> <!-- nfsexport's path and fsid attributes are inherited from the mountpoint & fsid attribute of the parent fs resource --> <nfsclient ref=\"bob\"/> <!-- nfsclient's path is inherited from the mountpoint and the fsid is added to the options string during export --> <nfsclient ref=\"jim\"/> </nfsexport> </fs> <fs name=\"2\" mountpoint=\"/mnt/bar\" device=\"/dev/sdb2\" fsid=\"12345\"> <nfsexport ref=\"exports\"> <nfsclient ref=\"bob\"/> <!-- Because all of the critical data for this resource is either defined in the resources block or inherited, we can reference it again! --> <nfsclient ref=\"jim\"/> </nfsexport> </fs> <ip address=\"10.2.13.20\"/> </service>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clust-rsc-inherit-resc-reuse-ca |
Chapter 6. Directory Server in Red Hat Enterprise Linux | Chapter 6. Directory Server in Red Hat Enterprise Linux Directory Server now supports enabling and disabling specific TLS versions Previously, Directory Server running on Red Hat Enterprise Linux 6 provided no configuration options to enable or disable specific TLS versions. For example, it was not possible to disable the insecure TLS 1.0 protocol while keeping later versions enabled. This update adds the nsTLS10 , nsTLS11 , and nsTLS12 parameters to the cn=encryption,cn=config entry. As a result, it is now possible to configure specific TLS protocol versions in Directory Server. Note that these parameters have a higher priority than the nsTLS1 parameter, which enables or disables all TLS protocol versions. (BZ# 1330758 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/new_features_directory_server_in_red_hat_enterprise_linux
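The release note above does not include an example, so the following is only a minimal sketch of how the new attributes could be set; the bind DN, the on/off values, and the note about restarting the instance are assumptions, not statements from the release note. It disables TLS 1.0 while keeping TLS 1.1 and TLS 1.2 enabled:

# Hypothetical illustration: adjust the per-version TLS switches under cn=encryption,cn=config
ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=encryption,cn=config
changetype: modify
replace: nsTLS10
nsTLS10: off
-
replace: nsTLS11
nsTLS11: on
-
replace: nsTLS12
nsTLS12: on
EOF
# Assumption: a restart of the Directory Server instance is typically required before the change takes effect.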
Chapter 17. Debugging low latency node tuning status | Chapter 17. Debugging low latency node tuning status Use the PerformanceProfile custom resource (CR) status fields for reporting tuning status and debugging latency issues in the cluster node. 17.1. Debugging low latency CNF tuning status The PerformanceProfile custom resource (CR) contains status fields for reporting tuning status and debugging latency degradation issues. These fields report on conditions that describe the state of the operator's reconciliation functionality. A typical issue can arise when the machine config pools that are attached to the performance profile are in a degraded state, causing the PerformanceProfile status to degrade. In this case, the machine config pool issues a failure message. The Node Tuning Operator contains the performanceProfile.spec.status.Conditions status field: Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded The Status field contains Conditions that specify Type values that indicate the status of the performance profile: Available All machine configs and Tuned profiles have been created successfully and are available for the cluster components that are responsible for processing them (NTO, MCO, Kubelet). Upgradeable Indicates whether the resources maintained by the Operator are in a state that is safe to upgrade. Progressing Indicates that the deployment process from the performance profile has started. Degraded Indicates an error if: Validation of the performance profile has failed. Creation of all relevant components did not complete successfully. Each of these types contains the following fields: Status The state for the specific type ( true or false ). Timestamp The transaction timestamp. Reason string The machine readable reason. Message string The human readable reason describing the state and error details, if any. 17.1.1. Machine config pools A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance profiles that encompass kernel args, kube config, huge pages allocation, and deployment of rt-kernel. The Performance Profile controller monitors changes in the MCP and updates the performance profile status accordingly. The only condition returned by the MCP to the performance profile status is when the MCP is Degraded , which leads to performanceProfile.status.condition.Degraded = true .
Example The following example is for a performance profile with an associated machine config pool ( worker-cnf ) that was created for it: The associated machine config pool is in a degraded state: # oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h The describe section of the MCP shows the reason: # oc describe mcp worker-cnf Example output Message: Node node-worker-cnf is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found" Reason: 1 nodes are reporting degraded status on sync The degraded state should also appear under the performance profile status field marked as degraded = true : # oc describe performanceprofiles performance Example output Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found". Reason: MCPDegraded Status: True Type: Degraded 17.2. Collecting low latency tuning debugging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including node tuning, NUMA topology, and other information needed to debug issues with low latency setup. For prompt support, supply diagnostic information for both OpenShift Container Platform and low latency tuning. 17.2.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as: Resource definitions Audit logs Service logs You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product. When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in your current working directory. 17.2.2. Gathering low latency tuning data Use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with low latency tuning, including: The Node Tuning Operator namespaces and child objects. MachineConfigPool and associated MachineConfig objects. The Node Tuning Operator and associated Tuned objects. Linux kernel command line options. CPU and NUMA topology Basic PCI device information and NUMA locality. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. Procedure Navigate to the directory where you want to store the must-gather data. 
Collect debugging information by running the following command: USD oc adm must-gather Example output [must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at "<cluster_version>" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version... [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default... [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift... [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system... [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd... ... Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at "<cluster_version>" ClusterOperators: All healthy and stable Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1 1 Replace must-gather-local.5421342344627712289// with the directory name created by the must-gather tool. Note Create a compressed file to attach the data to a support case or to use with the Performance Profile Creator wrapper script when you create a performance profile. Attach the compressed file to your support case on the Red Hat Customer Portal . Additional resources Gathering data about your cluster with the must-gather tool Managing nodes with MachineConfig and KubeletConfig CRs Using the Node Tuning Operator Configuring huge pages at boot time How huge pages are consumed by apps | [
"Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h",
"oc describe mcp worker-cnf",
"Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync",
"oc describe performanceprofiles performance",
"Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded",
"oc adm must-gather",
"[must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable",
"tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/cnf-debugging-low-latency-tuning-status |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/api_documentation/making-open-source-more-inclusive |
Chapter 5. HAProxy Configuration | Chapter 5. HAProxy Configuration This chapter explains the configuration of a basic setup that highlights the common configuration options an administrator could encounter when deploying HAProxy services for high availability environments. HAProxy has its own set of scheduling algorithms for load balancing. These algorithms are described in Section 5.1, "HAProxy Scheduling Algorithms" . HAProxy is configured by editing the /etc/haproxy/haproxy.cfg file. Load Balancer configuration using HAProxy consists of five sections for configuration: Section 5.2, "Global Settings" The proxies section, which consists of 4 subsections: The Section 5.3, "Default Settings" settings The Section 5.4, "Frontend Settings" settings The Section 5.5, "Backend Settings" settings 5.1. HAProxy Scheduling Algorithms The HAProxy scheduling algorithms for load balancing can be edited in the balance parameter in the backend section of the /etc/haproxy/haproxy.cfg configuration file. Note that HAProxy supports configuration with multiple back ends, and each back end can be configured with a scheduling algorithm. Round-Robin ( roundrobin ) Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular due to the fact that it is network-connection based and not host-based. Load Balancer round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. However, in HAProxy, since configuration of server weights can be done on the fly using this scheduler, the number of active servers is limited to 4095 per back end. Static Round-Robin ( static-rr ) Distributes each request sequentially around a pool of real servers as does Round-Robin , but does not allow configuration of server weight dynamically. However, because of the static nature of server weight, there is no limitation on the number of active servers in the back end. Least-Connection ( leastconn ) Distributes more requests to real servers with fewer active connections. Administrators with a dynamic environment with varying session or connection lengths may find this scheduler a better fit for their environments. It is also ideal for an environment where a group of servers have different capacities, as administrators can adjust weight on the fly using this scheduler. Source ( source ) Distributes requests to servers by hashing the requesting source IP address and dividing by the weight of all the running servers to determine which server will get the request. In a scenario where all servers are running, the source IP request will be consistently served by the same real server. If there is a change in the number or weight of the running servers, the session may be moved to another server because the hash/weight result has changed. URI ( uri ) Distributes requests to servers by hashing the entire URI (or a configurable portion of a URI) and dividing by the weight of all the running servers to determine which server will get the request. In a scenario where all active servers are running, the destination IP request will be consistently served by the same real server. This scheduler can be further configured by the length of characters at the start of a directory part of a URI to compute the hash result and the depth of directories in a URI (designated by forward slashes in the URI) to compute the hash result.
URL Parameter ( url_param ) Distributes requests to servers by looking up a particular parameter string in a source URL request and performing a hash calculation divided by the weight of all running servers. If the parameter is missing from the URL, the scheduler defaults to Round-robin scheduling. Modifiers may be used based on POST parameters as well as wait limits based on the number of maximum octets an administrator assigns to the weight for a certain parameter before computing the hash result. Header Name ( hdr ) Distributes requests to servers by checking a particular header name in each source HTTP request and performing a hash calculation divided by the weight of all running servers. If the header is absent, the scheduler defaults to Round-robin scheduling. RDP Cookie ( rdp-cookie ) Distributes requests to servers by looking up the RDP cookie for every TCP request and performing a hash calculation divided by the weight of all running servers. If the header is absent, the scheduler defaults to Round-robin scheduling. This method is ideal for persistence as it maintains session integrity. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/ch-haproxy-setup-VSA |
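The chapter describes the algorithms but does not show a configuration snippet, so the following is only an illustrative sketch; the backend name, server names, and addresses are assumptions rather than values from the guide. In /etc/haproxy/haproxy.cfg , the algorithm is selected with the balance keyword inside a backend section:

# Hypothetical backend showing where the balance parameter is set
backend app_servers
    balance roundrobin          # or static-rr, leastconn, source, uri
    server web1 192.0.2.11:80 check weight 1
    server web2 192.0.2.12:80 check weight 2

Schedulers that take an argument are written with it, for example balance url_param userid , balance hdr(Host) , or balance rdp-cookie(mstshash) ; if the named parameter, header, or cookie is absent from a request, HAProxy falls back to round-robin scheduling as described above.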
Getting Started with Fuse on Spring Boot | Getting Started with Fuse on Spring Boot Red Hat Fuse 7.13 Get started quickly with Red Hat Fuse on Spring Boot Red Hat Fuse Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_spring_boot/index |
Chapter 42. Working with Contexts | Chapter 42. Working with Contexts Abstract JAX-WS uses contexts to pass metadata along the messaging chain. This metadata, depending on its scope, is accessible to implementation level code. It is also accessible to JAX-WS handlers that operate on the message below the implementation level. 42.1. Understanding Contexts Overview In many instances it is necessary to pass information about a message to other parts of an application. Apache CXF does this using a context mechanism. Contexts are maps that hold properties relating to an outgoing or an incoming message. The properties stored in the context are typically metadata about the message, and the underlying transport used to communicate the message. For example, the transport specific headers used in transmitting the message, such as the HTTP response code or the JMS correlation ID, are stored in the JAX-WS contexts. The contexts are available at all levels of a JAX-WS application. However, they differ in subtle ways depending upon where in the message processing stack you are accessing the context. JAX-WS Handler implementations have direct access to the contexts and can access all properties that are set in them. Service implementations access contexts by having them injected, and can only access properties that are set in the APPLICATION scope. Consumer implementations can only access properties that are set in the APPLICATION scope. Figure 42.1, "Message Contexts and Message Processing Path" shows how the context properties pass through Apache CXF. As a message passes through the messaging chain, its associated message context passes along with it. Figure 42.1. Message Contexts and Message Processing Path How properties are stored in a context The message contexts are all implementations of the javax.xml.ws.handler.MessageContext interface. The MessageContext interface extends the java.util.Map<String key, Object value> interface. Map objects store information as key value pairs. In a message context, properties are stored as name/value pairs. A property's key is a String that identifies the property. The value of a property can be any value stored in any Java object. When the value is returned from a message context, the application must know the type to expect and cast accordingly. For example, if a property's value is stored in a UserInfo object it is still returned from a message context as an Object object that must be cast back into a UserInfo object. Properties in a message context also have a scope. The scope determines where a property can be accessed in the message processing chain. Property scopes Properties in a message context are scoped. A property can be in one of the following scopes: APPLICATION Properties scoped as APPLICATION are available to JAX-WS Handler implementations, consumer implementation code, and service provider implementation code. If a handler needs to pass a property to the service provider implementation, it sets the property's scope to APPLICATION . All properties set from either the consumer implementation or the service provider implementation contexts are automatically scoped as APPLICATION . HANDLER Properties scoped as HANDLER are only available to JAX-WS Handler implementations. Properties stored in a message context from a Handler implementation are scoped as HANDLER by default. You can change a property's scope using the message context's setScope() method. Example 42.1, "The MessageContext.setScope() Method" shows the method's signature. Example 42.1. 
The MessageContext.setScope() Method void setScope(String key, MessageContext.Scope scope) throws java.lang.IllegalArgumentException The first parameter specifies the property's key. The second parameter specifies the new scope for the property. The scope can be either: MessageContext.Scope.APPLICATION MessageContext.Scope.HANDLER Overview of contexts in handlers Classes that implement the JAX-WS Handler interface have direct access to a message's context information. The message's context information is passed into the Handler implementation's handleMessage() , handleFault() , and close() methods. Handler implementations have access to all of the properties stored in the message context, regardless of their scope. In addition, logical handlers use a specialized message context called a LogicalMessageContext . LogicalMessageContext objects have methods that access the contents of the message body. Overview of contexts in service implementations Service implementations can access properties scoped as APPLICATION from the message context. The service provider's implementation object accesses the message context through the WebServiceContext object. For more information see Section 42.2, "Working with Contexts in a Service Implementation" . Overview of contexts in consumer implementations Consumer implementations have indirect access to the contents of the message context. The consumer implementation has two separate message contexts: Request context - holds a copy of the properties used for outgoing requests Response context - holds a copy of the properties from an incoming response The dispatch layer transfers the properties between the consumer implementation's message contexts and the message context used by the Handler implementations. When a request is passed to the dispatch layer from the consumer implementation, the contents of the request context are copied into the message context that is used by the dispatch layer. When the response is returned from the service, the dispatch layer processes the message and sets the appropriate properties into its message context. After the dispatch layer processes a response, it copies all of the properties scoped as APPLICATION in its message context to the consumer implementation's response context. For more information see Section 42.3, "Working with Contexts in a Consumer Implementation" . 42.2. Working with Contexts in a Service Implementation Overview Context information is made available to service implementations using the WebServiceContext interface. From the WebServiceContext object you can obtain a MessageContext object that is populated with the current request's context properties in the application scope. You can manipulate the values of the properties, and they are propagated back through the response chain. Note The MessageContext interface inherits from the java.util.Map interface. Its contents can be manipulated using the Map interface's methods. Obtaining a context To obtain the message context in a service implementation do the following: Declare a variable of type WebServiceContext. Decorate the variable with the javax.annotation.Resource annotation to indicate that the context information is being injected into the variable. Obtain the MessageContext object from the WebServiceContext object using the getMessageContext() method. Important getMessageContext() can only be used in methods that are decorated with the @WebMethod annotation.
Example 42.2, "Obtaining a Context Object in a Service Implementation" shows code for obtaining a context object. Example 42.2. Obtaining a Context Object in a Service Implementation Reading a property from a context Once you have obtained the MessageContext object for your implementation, you can access the properties stored there using the get() method shown in Example 42.3, "The MessageContext.get() Method" . Example 42.3. The MessageContext.get() Method V get Object key Note This get() is inherited from the Map interface. The key parameter is the string representing the property you want to retrieve from the context. The get() returns an object that must be cast to the proper type for the property. Table 42.1, "Properties Available in the Service Implementation Context" lists a number of the properties that are available in a service implementation's context. Important Changing the values of the object returned from the context also changes the value of the property in the context. Example 42.4, "Getting a Property from a Service's Message Context" shows code for getting the name of the WSDL operation element that represents the invoked operation. Example 42.4. Getting a Property from a Service's Message Context Setting properties in a context Once you have obtained the MessageContext object for your implementation, you can set properties, and change existing properties, using the put() method shown in Example 42.5, "The MessageContext.put() Method" . Example 42.5. The MessageContext.put() Method V put K key V value ClassCastExceptionIllegalArgumentExceptionNullPointerException If the property being set already exists in the message context, the put() method replaces the existing value with the new value and returns the old value. If the property does not already exist in the message context, the put() method sets the property and returns null . Example 42.6, "Setting a Property in a Service's Message Context" shows code for setting the response code for an HTTP request. Example 42.6. Setting a Property in a Service's Message Context Supported contexts Table 42.1, "Properties Available in the Service Implementation Context" lists the properties accessible through the context in a service implementation object. Table 42.1. Properties Available in the Service Implementation Context Property Name Description org.apache.cxf.message.Message PROTOCOL_HEADERS [a] Specifies the transport specific header information. The value is stored as a java.util.Map<String, List<String>> . RESPONSE_CODE Specifies the response code returned to the consumer. The value is stored as an Integer object. ENDPOINT_ADDRESS Specifies the address of the service provider. The value is stored as a String . HTTP_REQUEST_METHOD Specifies the HTTP verb sent with a request. The value is stored as a String . PATH_INFO Specifies the path of the resource being requested. The value is stored as a String . The path is the portion of the URI after the hostname and before any query string. For example, if an endpoint's URI is http://cxf.apache.org/demo/widgets the path is /demo/widgets . QUERY_STRING Specifies the query, if any, attached to the URI used to invoke the request. The value is stored as a String . Queries appear at the end of the URI after a ? . For example, if a request is made to http://cxf.apache.org/demo/widgets?color the query is color . MTOM_ENABLED Specifies whether or not the service provider can use MTOM for SOAP attachments. The value is stored as a Boolean . 
SCHEMA_VALIDATION_ENABLED Specifies whether or not the service provider validates messages against a schema. The value is stored as a Boolean . FAULT_STACKTRACE_ENABLED Specifies if the runtime provides a stack trace along with a fault message. The value is stored as a Boolean . CONTENT_TYPE Specifies the MIME type of the message. The value is stored as a String . BASE_PATH Specifies the path of the resource being requested. The value is stored as a java.net.URL . The path is the portion of the URI after the hostname and before any query string. For example, if an endpoint's URL is http://cxf.apache.org/demo/widgets the base path is /demo/widgets . ENCODING Specifies the encoding of the message. The value is stored as a String . FIXED_PARAMETER_ORDER Specifies whether the parameters must appear in the message in a particular order. The value is stored as a Boolean . MAINTAIN_SESSION Specifies if the consumer wants to maintain the current session for future requests. The value is stored as a Boolean . WSDL_DESCRIPTION Specifies the WSDL document that defines the service being implemented. The value is stored as a org.xml.sax.InputSource object. WSDL_SERVICE Specifies the qualified name of the wsdl:service element that defines the service being implemented. The value is stored as a QName . WSDL_PORT Specifies the qualified name of the wsdl:port element that defines the endpoint used to access the service. The value is stored as a QName . WSDL_INTERFACE Specifies the qualified name of the wsdl:portType element that defines the service being implemented. The value is stored as a QName . WSDL_OPERATION Specifies the qualified name of the wsdl:operation element that corresponds to the operation invoked by the consumer. The value is stored as a QName . javax.xml.ws.handler.MessageContext MESSAGE_OUTBOUND_PROPERTY Specifies if a message is outbound. The value is stored as a Boolean . true specifies that a message is outbound. INBOUND_MESSAGE_ATTACHMENTS Contains any attachments included in the request message. The value is stored as a java.util.Map<String, DataHandler> . The key value for the map is the MIME Content-ID for the header. OUTBOUND_MESSAGE_ATTACHMENTS Contains any attachments for the response message. The value is stored as a java.util.Map<String, DataHandler> . The key value for the map is the MIME Content-ID for the header. WSDL_DESCRIPTION Specifies the WSDL document that defines the service being implemented. The value is stored as a org.xml.sax.InputSource object. WSDL_SERVICE Specifies the qualified name of the wsdl:service element that defines the service being implemented. The value is stored as a QName . WSDL_PORT Specifies the qualified name of the wsdl:port element that defines the endpoint used to access the service. The value is stored as a QName . WSDL_INTERFACE Specifies the qualified name of the wsdl:portType element that defines the service being implemented. The value is stored as a QName . WSDL_OPERATION Specifies the qualified name of the wsdl:operation element that corresponds to the operation invoked by the consumer. The value is stored as a QName . HTTP_RESPONSE_CODE Specifies the response code returned to the consumer. The value is stored as an Integer object. HTTP_REQUEST_HEADERS Specifies the HTTP headers on a request. The value is stored as a java.util.Map<String, List<String>> . HTTP_RESPONSE_HEADERS Specifies the HTTP headers for the response. The value is stored as a java.util.Map<String, List<String>> . 
HTTP_REQUEST_METHOD Specifies the HTTP verb sent with a request. The value is stored as a String . SERVLET_REQUEST Contains the servlet's request object. The value is stored as a javax.servlet.http.HttpServletRequest . SERVLET_RESPONSE Contains the servlet's response object. The value is stored as a javax.servlet.http.HttpResponse . SERVLET_CONTEXT Contains the servlet's context object. The value is stored as a javax.servlet.ServletContext . PATH_INFO Specifies the path of the resource being requested. The value is stored as a String . The path is the portion of the URI after the hostname and before any query string. For example, if an endpoint's URL is http://cxf.apache.org/demo/widgets the path is /demo/widgets . QUERY_STRING Specifies the query, if any, attached to the URI used to invoke the request. The value is stored as a String . Queries appear at the end of the URI after a ? . For example, if a request is made to http://cxf.apache.org/demo/widgets?color the query string is color . REFERENCE_PARAMETERS Specifies the WS-Addressing reference parameters. This includes all of the SOAP headers whose wsa:IsReferenceParameter attribute is set to true . The value is stored as a java.util.List . org.apache.cxf.transport.jms.JMSConstants JMS_SERVER_HEADERS Contains the JMS message headers. For more information see Section 42.4, "Working with JMS Message Properties" . [a] When using HTTP this property is the same as the standard JAX-WS defined property. 42.3. Working with Contexts in a Consumer Implementation Overview Consumer implementations have access to context information through the BindingProvider interface. The BindingProvider instance holds context information in two separate contexts: Request Context The request context enables you to set properties that affect outbound messages. Request context properties are applied to a specific port instance and, once set, the properties affect every subsequent operation invocation made on the port, until such time as a property is explicitly cleared. For example, you might use a request context property to set a connection timeout or to initialize data for sending in a header. Response Context The response context enables you to read the property values set by the response to the last operation invocation made from the current thread. Response context properties are reset after every operation invocation. For example, you might access a response context property to read header information received from the last inbound message. Important Only information that is placed in the application scope of a message context can be accessed by the consumer implementation. Obtaining a context Contexts are obtained using the javax.xml.ws.BindingProvider interface. The BindingProvider interface has two methods for obtaining a context: getRequestContext() The getRequestContext() method, shown in Example 42.7, "The getRequestContext() Method" , returns the request context as a Map object. The returned Map object can be used to directly manipulate the contents of the context. Example 42.7. The getRequestContext() Method Map<String, Object> getRequestContext getResponseContext() The getResponseContext() , shown in Example 42.8, "The getResponseContext() Method" , returns the response context as a Map object. The returned Map object's contents reflect the state of the response context's contents from the most recent successful request on a remote service made in the current thread. Example 42.8. 
The getResponseContext() Method Map<String, Object> getResponseContext Since proxy objects implement the BindingProvider interface, a BindingProvider object can be obtained by casting a proxy object. The contexts obtained from the BindingProvider object are only valid for operations invoked on the proxy object used to create it. Example 42.9, "Getting a Consumer's Request Context" shows code for obtaining the request context for a proxy. Example 42.9. Getting a Consumer's Request Context Reading a property from a context Consumer contexts are stored in java.util.Map<String, Object> objects. The map has keys that are String objects and values that contain arbitrary objects. Use java.util.Map.get() to access an entry in the map of response context properties. To retrieve a particular context property, ContextPropertyName , use the code shown in Example 42.10, "Reading a Response Context Property" . Example 42.10. Reading a Response Context Property Setting properties in a context Consumer contexts are hash maps stored in java.util.Map<String, Object> objects. The map has keys that are String objects and values that are arbitrary objects. To set a property in a context use the java.util.Map.put() method. While you can set properties in both the request context and the response context, only the changes made to the request context have any impact on message processing. The properties in the response context are reset when each remote invocation is completed on the current thread. The code shown in Example 42.11, "Setting a Request Context Property" changes the address of the target service provider by setting the value of the BindingProvider.ENDPOINT_ADDRESS_PROPERTY. Example 42.11. Setting a Request Context Property Important Once a property is set in the request context its value is used for all subsequent remote invocations. You can change the value and the changed value will then be used. Supported contexts Apache CXF supports the following context properties in consumer implementations: Table 42.2. Consumer Context Properties Property Name Description javax.xml.ws.BindingProvider ENDPOINT_ADDRESS_PROPERTY Specifies the address of the target service. The value is stored as a String . USERNAME_PROPERTY [a] Specifies the username used for HTTP basic authentication. The value is stored as a String . PASSWORD_PROPERTY [b] Specifies the password used for HTTP basic authentication. The value is stored as a String . SESSION_MAINTAIN_PROPERTY [c] Specifies if the client wants to maintain session information. The value is stored as a Boolean object. org.apache.cxf.ws.addressing.JAXWSAConstants CLIENT_ADDRESSING_PROPERTIES Specifies the WS-Addressing information used by the consumer to contact the desired service provider. The value is stored as a org.apache.cxf.ws.addressing.AddressingProperties . org.apache.cxf.transports.jms.context.JMSConstants JMS_CLIENT_REQUEST_HEADERS Contains the JMS headers for the message. For more information see Section 42.4, "Working with JMS Message Properties" . [a] This property is overridden by the username defined in the HTTP security settings. [b] This property is overridden by the password defined in the HTTP security settings. [c] The Apache CXF ignores this property. 42.4. Working with JMS Message Properties Abstract The Apache CXF JMS transport has a context mechanism that can be used to inspect a JMS message's properties. The context mechanism can also be used to set a JMS message's properties. 42.4.1. 
Inspecting JMS Message Headers Abstract Consumers and services use different context mechanisms to access the JMS message header properties. However, both mechanisms return the header properties as an org.apache.cxf.transports.jms.context.JMSMessageHeadersType object. Getting the JMS Message Headers in a Service To get the JMS message header properties from the WebServiceContext object, do the following: Obtain the context as described in the section called "Obtaining a context" . Get the message headers from the message context using the message context's get() method with the parameter org.apache.cxf.transports.jms.JMSConstants.JMS_SERVER_HEADERS. Example 42.12, "Getting JMS Message Headers in a Service Implementation" shows code for getting the JMS message headers from a service's message context: Example 42.12. Getting JMS Message Headers in a Service Implementation Getting JMS Message Header Properties in a Consumer Once a message is successfully retrieved from the JMS transport, you can inspect the JMS header properties using the consumer's response context. In addition, you can set or check the length of time the client will wait for a response before timing out, as described in the section called "Client Receive Timeout" . To get the JMS message headers from a consumer's response context do the following: Get the response context as described in the section called "Obtaining a context" . Get the JMS message header properties from the response context using the context's get() method with org.apache.cxf.transports.jms.JMSConstants.JMS_CLIENT_RESPONSE_HEADERS as the parameter. Example 42.13, "Getting the JMS Headers from a Consumer Response Header" shows code for getting the JMS message header properties from a consumer's response context. Example 42.13. Getting the JMS Headers from a Consumer Response Header The code in Example 42.13, "Getting the JMS Headers from a Consumer Response Header" does the following: Casts the proxy to a BindingProvider. Gets the response context. Retrieves the JMS message headers from the response context. 42.4.2. Inspecting the Message Header Properties Standard JMS Header Properties Table 42.3, "JMS Header Properties" lists the standard properties in the JMS header that you can inspect. Table 42.3. JMS Header Properties Property Name Property Type Getter Method Correlation ID string getJMSCorrelationID() Delivery Mode int getJMSDeliveryMode() Message Expiration long getJMSExpiration() Message ID string getJMSMessageID() Priority int getJMSPriority() Redelivered boolean getJMSRedelivered() Time Stamp long getJMSTimeStamp() Type string getJMSType() Time To Live long getTimeToLive() Optional Header Properties In addition, you can inspect any optional properties stored in the JMS header using JMSMessageHeadersType.getProperty() . The optional properties are returned as a List of org.apache.cxf.transports.jms.context.JMSPropertyType . Optional properties are stored as name/value pairs. Example Example 42.14, "Reading the JMS Header Properties" shows code for inspecting some of the JMS properties using the response context. Example 42.14. Reading the JMS Header Properties The code in Example 42.14, "Reading the JMS Header Properties" does the following: Prints the value of the message's correlation ID. Prints the value of the message's priority property. Prints the value of the message's redelivered property. Gets the list of the message's optional header properties. Gets an Iterator to traverse the list of properties.
Iterates through the list of optional properties and prints their name and value. 42.4.3. Setting JMS Properties Abstract Using the request context in a consumer endpoint, you can set a number of the JMS message header properties and the consumer endpoint's timeout value. These properties are valid for a single invocation. You must reset them each time you invoke an operation on the service proxy. Note that you cannot set header properties in a service. JMS Header Properties Table 42.4, "Settable JMS Header Properties" lists the properties in the JMS header that can be set using the consumer endpoint's request context. Table 42.4. Settable JMS Header Properties Property Name Property Type Setter Method Correlation ID string setJMSCorrelationID() Delivery Mode int setJMSDeliveryMode() Priority int setJMSPriority() Time To Live long setTimeToLive() To set these properties do the following: Create an org.apache.cxf.transports.jms.context.JMSMessageHeadersType object. Populate the values you want to set using the appropriate setter methods described in Table 42.4, "Settable JMS Header Properties" . Set the values to the request context by calling the request context's put() method using org.apache.cxf.transports.jms.JMSConstants.JMS_CLIENT_REQUEST_HEADERS as the first argument, and the new JMSMessageHeadersType object as the second argument. Optional JMS Header Properties You can also set optional properties to the JMS header. Optional JMS header properties are stored in the JMSMessageHeadersType object that is used to set the other JMS header properties. They are stored as a List object containing org.apache.cxf.transports.jms.context.JMSPropertyType objects. To add optional properties to the JMS header do the following: Create a JMSPropertyType object. Set the property's name field using setName() . Set the property's value field using setValue() . Add the property to the JMS message header using JMSMessageHeadersType.getProperty().add(JMSPropertyType) . Repeat the procedure until all of the properties have been added to the message header. Client Receive Timeout In addition to the JMS header properties, you can set the amount of time a consumer endpoint waits for a response before timing out. You set the value by calling the request context's put() method with org.apache.cxf.transports.jms.JMSConstants.JMS_CLIENT_RECEIVE_TIMEOUT as the first argument and a long representing the amount of time in milliseconds that you want the consumer to wait as the second argument. Example Example 42.15, "Setting JMS Properties using the Request Context" shows code for setting some of the JMS properties using the request context. Example 42.15. Setting JMS Properties using the Request Context The code in Example 42.15, "Setting JMS Properties using the Request Context" does the following: Gets the InvocationHandler for the proxy whose JMS properties you want to change. Checks to see if the InvocationHandler is a BindingProvider . Casts the returned InvocationHandler object into a BindingProvider object to retrieve the request context. Gets the request context. Creates a JMSMessageHeadersType object to hold the new message header values. Sets the Correlation ID. Sets the Expiration property to 60 minutes. Creates a new JMSPropertyType object. Sets the values for the optional property. Adds the optional property to the message header. Sets the JMS message header values into the request context. Sets the client receive timeout property to 1 second. | [
"import javax.xml.ws.*; import javax.xml.ws.handler.*; import javax.annotation.*; @WebServiceProvider public class WidgetServiceImpl { @Resource WebServiceContext wsc; @WebMethod public String getColor(String itemNum) { MessageContext context = wsc.getMessageContext(); } }",
"import javax.xml.ws.handler.MessageContext; import org.apache.cxf.message.Message; // MessageContext context retrieved in a previous example QName wsdl_operation = (QName)context.get(Message.WSDL_OPERATION);",
"import javax.xml.ws.handler.MessageContext; import org.apache.cxf.message.Message; // MessageContext context retrieved in a previous example context.put(Message.RESPONSE_CODE, new Integer(404));",
"// Proxy widgetProxy obtained previously BindingProvider bp = (BindingProvider)widgetProxy; Map<String, Object> requestContext = bp.getRequestContext();",
"// Invoke an operation. port.SomeOperation(); // Read response context property. java.util.Map<String, Object> responseContext = ((javax.xml.ws.BindingProvider)port).getResponseContext(); PropertyType propValue = ( PropertyType ) responseContext.get( ContextPropertyName );",
"// Set request context property. java.util.Map<String, Object> requestContext = ((javax.xml.ws.BindingProvider)port).getRequestContext(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, \"http://localhost:8080/widgets\"); // Invoke an operation. port.SomeOperation();",
"import org.apache.cxf.transport.jms.JMSConstants; import org.apache.cxf.transports.jms.context.JMSMessageHeadersType; @WebService(serviceName = \"HelloWorldService\", portName = \"HelloWorldPort\", endpointInterface = \"org.apache.cxf.hello_world_jms.HelloWorldPortType\", targetNamespace = \"http://cxf.apache.org/hello_world_jms\") public class GreeterImplTwoWayJMS implements HelloWorldPortType { @Resource protected WebServiceContext wsContext; @WebMethod public String greetMe(String me) { MessageContext mc = wsContext.getMessageContext(); JMSMessageHeadersType headers = (JMSMessageHeadersType) mc.get(JMSConstants.JMS_SERVER_HEADERS); } }",
"import org.apache.cxf.transports.jms.context.*; // Proxy greeter initialized previously BindingProvider bp = (BindingProvider)greeter; Map<String, Object> responseContext = bp.getResponseContext(); JMSMessageHeadersType responseHdr = (JMSMessageHeadersType) responseContext.get(JMSConstants.JMS_CLIENT_RESPONSE_HEADERS); }",
"// JMSMessageHeadersType messageHdr retrieved previously System.out.println(\"Correlation ID: \"+messageHdr.getJMSCorrelationID()); System.out.println(\"Message Priority: \"+messageHdr.getJMSPriority()); System.out.println(\"Redelivered: \"+messageHdr.getRedelivered()); JMSPropertyType prop = null; List<JMSPropertyType> optProps = messageHdr.getProperty(); Iterator<JMSPropertyType> iter = optProps.iterator(); while (iter.hasNext()) { prop = iter.next(); System.out.println(\"Property name: \"+prop.getName()); System.out.println(\"Property value: \"+prop.getValue()); }",
"import org.apache.cxf.transports.jms.context.*; // Proxy greeter initialized previously InvocationHandler handler = Proxy.getInvocationHandler(greeter); BindingProvider bp= null; if (handler instanceof BindingProvider) { bp = (BindingProvider)handler; Map<String, Object> requestContext = bp.getRequestContext(); JMSMessageHeadersType requestHdr = new JMSMessageHeadersType(); requestHdr.setJMSCorrelationID(\"WithBob\"); requestHdr.setJMSExpiration(3600000L); JMSPropertyType prop = new JMSPropertyType; prop.setName(\"MyProperty\"); prop.setValue(\"Bluebird\"); requestHdr.getProperty().add(prop); requestContext.put(JMSConstants.CLIENT_REQUEST_HEADERS, requestHdr); requestContext.put(JMSConstants.CLIENT_RECEIVE_TIMEOUT, new Long(1000)); }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwscontexts |
Chapter 5. Deleting a model registry | Chapter 5. Deleting a model registry You can delete model registries that you no longer require. Important When you delete a model registry, databases connected to the model registry will not be removed. To remove any remaining databases, contact your cluster administrator. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. An available model registry exists in your deployment. Procedure From the OpenShift AI dashboard, click Settings Model registry settings . Click the action menu ( ... ) beside the model registry that you want to delete. Click Delete model registry . In the Delete model registry? dialog that appears, enter the name of the model registry in the text field to confirm that you intend to delete it. Click Delete model registry . Verification The model registry no longer appears on the Model Registry Settings page. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_model_registries/deleting-a-model-registry_managing-model-registries |
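For cluster administrators who need to remove a database that remains after a model registry is deleted, the following is a minimal command-line sketch. It assumes the registry database was deployed as a Deployment, Service, and PersistentVolumeClaim in the rhoai-model-registries namespace and that the resources carry a label derived from the registry name; the namespace, label, and resource names are illustrative assumptions, not part of the documented procedure.

# Inspect workloads and storage that might belong to the deleted registry's database (namespace is an assumption).
oc get deployments,services,pvc -n rhoai-model-registries

# Remove the leftover database resources for a registry that was named "my-registry" (label is hypothetical).
oc delete deployment,service,pvc -l app=my-registry-db -n rhoai-model-registries

Running oc get pvc -n rhoai-model-registries again afterwards confirms that no orphaned storage remains.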
Chapter 7. EndpointSlice [discovery.k8s.io/v1] | Chapter 7. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object Required addressType endpoints 7.1. Specification Property Type Description addressType string addressType specifies the type of address carried by this EndpointSlice. All addresses in this slice must be the same type. This field is immutable after creation. The following address types are currently supported: * IPv4: Represents an IPv4 Address. * IPv6: Represents an IPv6 Address. * FQDN: Represents a Fully Qualified Domain Name. Possible enum values: - "FQDN" represents a FQDN. - "IPv4" represents an IPv4 Address. - "IPv6" represents an IPv6 Address. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources endpoints array endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints. endpoints[] object Endpoint represents a single logical "backend" implementing a service. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. ports array ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates "all ports". Each slice may include a maximum of 100 ports. ports[] object EndpointPort represents a Port used by an EndpointSlice 7.1.1. .endpoints Description endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints. Type array 7.1.2. .endpoints[] Description Endpoint represents a single logical "backend" implementing a service. Type object Required addresses Property Type Description addresses array (string) addresses of this endpoint. The contents of this field are interpreted according to the corresponding EndpointSlice addressType field. Consumers must handle different types of addresses in the context of their own capabilities. This must contain at least one address but no more than 100. These are all assumed to be fungible and clients may choose to only use the first element. Refer to: https://issue.k8s.io/106267 conditions object EndpointConditions represents the current condition of an endpoint. deprecatedTopology object (string) deprecatedTopology contains topology information part of the v1beta1 API. This field is deprecated, and will be removed when the v1beta1 API is removed (no sooner than kubernetes v1.24). While this field can hold values, it is not writable through the v1 API, and any attempts to write to it will be silently ignored. Topology information can be found in the zone and nodeName fields instead. hints object EndpointHints provides hints describing how an endpoint should be consumed. hostname string hostname of this endpoint. 
This field may be used by consumers of endpoints to distinguish endpoints from each other (e.g. in DNS names). Multiple endpoints which use the same hostname should be considered fungible (e.g. multiple A values in DNS). Must be lowercase and pass DNS Label (RFC 1123) validation. nodeName string nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node. targetRef ObjectReference targetRef is a reference to a Kubernetes object that represents this endpoint. zone string zone is the name of the Zone this endpoint exists in. 7.1.3. .endpoints[].conditions Description EndpointConditions represents the current condition of an endpoint. Type object Property Type Description ready boolean ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be "true" for terminating endpoints. serving boolean serving is identical to ready except that it is set regardless of the terminating state of endpoints. This condition should be set to true for a ready endpoint that is terminating. If nil, consumers should defer to the ready condition. This field can be enabled with the EndpointSliceTerminatingCondition feature gate. terminating boolean terminating indicates that this endpoint is terminating. A nil value indicates an unknown state. Consumers should interpret this unknown state to mean that the endpoint is not terminating. This field can be enabled with the EndpointSliceTerminatingCondition feature gate. 7.1.4. .endpoints[].hints Description EndpointHints provides hints describing how an endpoint should be consumed. Type object Property Type Description forZones array forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing. forZones[] object ForZone provides information about which zones should consume this endpoint. 7.1.5. .endpoints[].hints.forZones Description forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing. Type array 7.1.6. .endpoints[].hints.forZones[] Description ForZone provides information about which zones should consume this endpoint. Type object Required name Property Type Description name string name represents the name of the zone. 7.1.7. .ports Description ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates "all ports". Each slice may include a maximum of 100 ports. Type array 7.1.8. .ports[] Description EndpointPort represents a Port used by an EndpointSlice Type object Property Type Description appProtocol string The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port. All ports in an EndpointSlice must have a unique name. If the EndpointSlice is dervied from a Kubernetes service, this corresponds to the Service.ports[].name. Name must either be an empty string or pass DNS_LABEL validation: * must be no more than 63 characters long. 
* must consist of lower case alphanumeric characters or '-'. * must start and end with an alphanumeric character. Default is empty string. port integer The port number of the endpoint. If this is not specified, ports are not restricted and must be interpreted in the context of the specific consumer. protocol string The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP. 7.2. API endpoints The following API endpoints are available: /apis/discovery.k8s.io/v1/endpointslices GET : list or watch objects of kind EndpointSlice /apis/discovery.k8s.io/v1/watch/endpointslices GET : watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices DELETE : delete collection of EndpointSlice GET : list or watch objects of kind EndpointSlice POST : create an EndpointSlice /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices GET : watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices/{name} DELETE : delete an EndpointSlice GET : read the specified EndpointSlice PATCH : partially update the specified EndpointSlice PUT : replace the specified EndpointSlice /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices/{name} GET : watch changes to an object of kind EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/discovery.k8s.io/v1/endpointslices Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind EndpointSlice Table 7.2. HTTP responses HTTP code Reponse body 200 - OK EndpointSliceList schema 401 - Unauthorized Empty 7.2.2. /apis/discovery.k8s.io/v1/watch/endpointslices Table 7.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. Table 7.4. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices Table 7.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EndpointSlice Table 7.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.8. Body parameters Parameter Type Description body DeleteOptions schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind EndpointSlice Table 7.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK EndpointSliceList schema 401 - Unauthorized Empty HTTP method POST Description create an EndpointSlice Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body EndpointSlice schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 202 - Accepted EndpointSlice schema 401 - Unauthorized Empty 7.2.4. /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices Table 7.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. Table 7.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices/{name} Table 7.18. Global path parameters Parameter Type Description name string name of the EndpointSlice namespace string object name and auth scope, such as for teams and projects Table 7.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EndpointSlice Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.21. Body parameters Parameter Type Description body DeleteOptions schema Table 7.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EndpointSlice Table 7.23. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EndpointSlice Table 7.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.25. Body parameters Parameter Type Description body Patch schema Table 7.26. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EndpointSlice Table 7.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.28. Body parameters Parameter Type Description body EndpointSlice schema Table 7.29. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 401 - Unauthorized Empty 7.2.6. /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices/{name} Table 7.30. Global path parameters Parameter Type Description name string name of the EndpointSlice namespace string object name and auth scope, such as for teams and projects Table 7.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/endpointslice-discovery-k8s-io-v1
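To make the schema in this chapter concrete, the following is a minimal sketch that creates an EndpointSlice with oc and then lists the slices in a namespace, which corresponds to the GET collection endpoint described above. The namespace, slice name, owning service label, port, addresses, node, and zone values are illustrative assumptions; adjust them to your environment.

# Create a minimal IPv4 EndpointSlice (all names and addresses are examples).
oc apply -f - <<'EOF'
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc12
  namespace: my-namespace
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - "10.128.2.15"
    conditions:
      ready: true
    nodeName: worker-0
    zone: us-east-1a
EOF

# List EndpointSlices in the namespace.
oc get endpointslices -n my-namespace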
Chapter 3. Applying patch updates and minor release upgrades to Red Hat Process Automation Manager | Chapter 3. Applying patch updates and minor release upgrades to Red Hat Process Automation Manager Automated update tools are often provided with both patch updates and new minor versions of Red Hat Process Automation Manager to facilitate updating certain components of Red Hat Process Automation Manager, such as Business Central, KIE Server, and the headless Process Automation Manager controller. Other Red Hat Process Automation Manager artifacts, such as the decision engine and standalone Business Central, are released as new artifacts with each minor release, and you must reinstall them to apply the update. You can use the same automated update tool to apply both patch updates and minor release upgrades to Red Hat Process Automation Manager 7.13. Patch updates of Red Hat Process Automation Manager, such as an update from version 7.13 to 7.13.5, include the latest security updates and bug fixes. Minor release upgrades of Red Hat Process Automation Manager, such as an upgrade from version 7.12.x to 7.13, include enhancements, security updates, and bug fixes. Note Only updates for Red Hat Process Automation Manager are included in Red Hat Process Automation Manager update tools. Updates to Red Hat JBoss EAP must be applied using Red Hat JBoss EAP patch distributions. For more information about Red Hat JBoss EAP patching, see the Red Hat JBoss EAP patching and upgrading guide . Prerequisites Your Red Hat Process Automation Manager and KIE Server instances are not running. Do not apply updates while you are running an instance of Red Hat Process Automation Manager or KIE Server. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options. If you are upgrading to a new minor release of Red Hat Process Automation Manager, such as an upgrade from version 7.12.x to 7.13, first apply the latest patch update to your current version of Red Hat Process Automation Manager and then follow this procedure again to upgrade to the new minor release. Click Patches , download the Red Hat Process Automation Manager [VERSION] Update Tool , and extract the downloaded rhpam-$VERSION-update.zip file to a temporary directory. This update tool automates the update of certain components of Red Hat Process Automation Manager, such as Business Central, KIE Server, and the headless Process Automation Manager controller. Use this update tool first to apply updates and then install any other updates or new release artifacts that are relevant to your Red Hat Process Automation Manager distribution. If you want to preserve any files from being updated by the update tool, navigate to the extracted rhpam-$VERSION-update folder, open the blacklist.txt file, and add the relative paths to the files that you do not want to be updated. When a file is listed in the blacklist.txt file, the update script does not replace the file with the new version but instead leaves the file in place and adds the new version in the same location with a .new suffix. If you block files that are no longer being distributed, the update tool creates an empty marker file with a .removed suffix. You can then choose to retain, merge, or delete these new files manually.
Example files to be excluded in the blacklist.txt file: The contents of the blocked file directories after the update: In your command terminal, navigate to the temporary directory where you extracted the rhpam-$VERSION-update.zip file and run the apply-updates script in the following format: Important Make sure that your Red Hat Process Automation Manager and KIE Server instances are not running before you apply updates. Do not apply updates while you are running an instance of Red Hat Process Automation Manager or KIE Server. On Linux or Unix-based systems: On Windows: The $DISTRO_PATH portion is the path to the relevant distribution directory and the $DISTRO_TYPE portion is the type of distribution that you are updating with this update. The following distribution types are supported by the Red Hat Process Automation Manager update tool: rhpam-business-central-eap7-deployable : Updates Business Central ( business-central.war ) rhpam-kie-server-ee8 : Updates KIE Server ( kie-server.war ) Note The update tool updates and replaces the Red Hat JBoss EAP EE7 deployable with the Red Hat JBoss EAP EE8 deployable. Red Hat JBoss EAP EE7 is used for WebLogic and WebSphere, whereas version EE8 is used for Red Hat JBoss EAP. Make sure that KIE Server on WebLogic and WebSphere is not updated by the update tool. rhpam-kie-server-jws : Updates KIE Server on Red Hat JBoss Web Server ( kie-server.war ) rhpam-controller-ee7 : Updates the headless Process Automation Manager controller ( controller.war ) rhpam-controller-jws : Updates the headless Process Automation Manager controller on Red Hat JBoss Web Server ( controller.war ) Example update to Business Central and KIE Server for a full Red Hat Process Automation Manager distribution on Red Hat JBoss EAP: Example update to the headless Process Automation Manager controller, if used: The update script creates a backup folder in the extracted rhpam-$VERSION-update folder with a copy of the specified distribution, and then proceeds with the update. After the update tool completes, return to the Software Downloads page of the Red Hat Customer Portal where you downloaded the update tool and install any other updates or new release artifacts that are relevant to your Red Hat Process Automation Manager distribution. For files that already exist in your Red Hat Process Automation Manager distribution, such as .jar files for the decision engine or other add-ons, replace the existing version of the file with the new version from the Red Hat Customer Portal. If you use the standalone Red Hat Process Automation Manager 7.13.5 Maven Repository artifact ( rhpam-7.13.5-maven-repository.zip ), such as in air-gap environments, download Red Hat Process Automation Manager 7.13.5 Maven Repository and extract the downloaded rhpam-7.13.5-maven-repository.zip file to your existing ~/maven-repository directory to update the relevant contents. Example Maven repository update: Note You can remove the /tmp/rhbaMavenRepoUpdate folder after you complete the update.
Optional: If you are changing Red Hat Process Automation Manager from using property-based user storage to file-based user storage, complete the following steps: Navigate to the $JBOSS_HOME directory and run one of the following commands: On Linux or Unix-based systems: On Windows: Run the following command: On Linux or Unix-based systems: On Windows: Navigate to the directory where you extracted the rhpam-$VERSION-update.zip file and run one of the following commands to apply the kie-fs-realm patch: On Linux or Unix-based systems: On Windows: After you finish applying all relevant updates, start Red Hat Process Automation Manager and KIE Server and log in to Business Central. Verify that all project data is present and accurate in Business Central, and in the top-right corner of the Business Central window, click your profile name and click About to verify the updated product version number. If you encounter errors or notice any missing data in Business Central, you can restore the contents in the backup folder within the rhpam-$VERSION-update folder to revert the update tool changes. You can also reinstall the relevant release artifacts from your version of Red Hat Process Automation Manager in the Red Hat Customer Portal. After restoring your distribution, you can try again to run the update. | [
"WEB-INF/web.xml // Custom file styles/base.css // Obsolete custom file kept for record",
"ls WEB-INF web.xml web.xml.new",
"ls styles base.css base.css.removed",
"./apply-updates.sh USDDISTRO_PATH USDDISTRO_TYPE",
".\\apply-updates.bat USDDISTRO_PATH USDDISTRO_TYPE",
"./apply-updates.sh ~EAP_HOME/standalone/deployments/business-central.war rhpam-business-central-eap7-deployable ./apply-updates.sh ~EAP_HOME/standalone/deployments/kie-server.war rhpam-kie-server-ee8",
"./apply-updates.sh ~EAP_HOME/standalone/deployments/controller.war rhpam-controller-ee7",
"unzip -o rhpam-7.13.5-maven-repository.zip 'rhba-7.13.5.GA-maven-repository/maven-repository/*' -d /tmp/rhbaMavenRepoUpdate mv /tmp/rhbaMavenRepoUpdate/rhba-7.13.5.GA-maven-repository/maven-repository/ USDREPO_PATH/",
"./bin/standalone.sh --admin-only -c standalone-full.xml",
"./bin/jboss-cli.sh --connect --file=rhpam-USDVERSION-update/elytron/add-kie-fs-realm.cli",
"./bin/standalone.bat --admin-only -c standalone-full.xml",
"./bin/jboss-cli.bat --connect --file=rhpam-USDVERSION-update/elytron/add-kie-fs-realm.cli",
"./bin/elytron-tool.sh filesystem-realm --users-file standalone/configuration/application-users.properties --roles-file standalone/configuration/application-roles.properties --output-location standalone/configuration/kie-fs-realm-users --filesystem-realm-name kie-fs-realm-users",
"./bin/elytron-tool.bat filesystem-realm --users-file standalone/configuration/application-users.properties --roles-file standalone/configuration/application-roles.properties --output-location standalone/configuration/kie-fs-realm-users --filesystem-realm-name kie-fs-realm-users",
"./elytron/kie-fs-realm-patch.sh ~/USDJBOSS_HOME/standalone/configuration/kie-fs-realm-users/",
"./elytron/kie-fs-realm-patch.bat ~/USDJBOSS_HOME/standalone/configuration/kie-fs-realm-users/"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/patches-applying-proc_execution-server |
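To tie the steps above together, the following is a minimal sketch of a complete update run followed by a check of the optional file-based user storage migration. It is illustrative only: the temporary directory /tmp/rhpam-update, the /opt/EAP_HOME path, and the grep check are assumptions made for this sketch, and 7.13.5 is simply the version used elsewhere in this procedure.

# Extract the update tool and update Business Central and KIE Server on Red Hat JBoss EAP (paths are illustrative)
unzip rhpam-7.13.5-update.zip -d /tmp/rhpam-update
cd /tmp/rhpam-update/rhpam-7.13.5-update
./apply-updates.sh /opt/EAP_HOME/standalone/deployments/business-central.war rhpam-business-central-eap7-deployable
./apply-updates.sh /opt/EAP_HOME/standalone/deployments/kie-server.war rhpam-kie-server-ee8

# If you migrated to file-based user storage, confirm that the kie-fs-realm users and the Elytron realm definition exist
ls /opt/EAP_HOME/standalone/configuration/kie-fs-realm-users/
grep -i 'kie-fs-realm-users' /opt/EAP_HOME/standalone/configuration/standalone-full.xml

If either check comes back empty, restore the backup folder that the update script created and run the update again.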
Chapter 1. Introduction to Red Hat OpenStack certification program | Chapter 1. Introduction to Red Hat OpenStack certification program Use this guide to certify your hardware, software, and applications relying on OpenStack services or APIs. 1.1. The Red Hat certification program overview The Red Hat certification program ensures the compatibility of your hardware, software, and cloud products on the OpenStack Platform . The program has three main elements: Test suite : Comprises tests for hardware or software applications undergoing certification. Red Hat Certification Ecosystem : Helps to explore and find certified products including hardware, software, cloud, and service providers. Support : A joint support relationship between you and Red Hat. This table summarizes the basic differences between a product listing and components: Product listing Component (Project) Includes detailed information about your product. The individual containers, operators, helm charts, and infrastructure services that you test, certify, and then add to the product listing. Products are composed of one or more components. Components are added to a product listing. You add components to a product for proceeding with certification. A component can be used in multiple products by adding it to each product listing. A product can not be published without certified components. Certified components are published as part of a product listing. 1.2. Certification workflow Follow these high-level steps to certify your hardware, software, and cloud products: Note Red Hat recommends that you are a Red Hat Certified Engineer or hold equivalent experience before starting the certification process. Task Summary The certification workflow includes three primary stages - Section 1.2.1, "Certification onboarding" Section 1.2.2, "Certification testing" Section 1.2.3, "Publishing the certified application" 1.2.1. Certification onboarding Perform the steps outlined for certification onboarding: Join the Red Hat Connect for Technology Partner Program. Agree to the program terms and conditions. Create your product listing by selecting your desired product category. You can select from the available product categories: Containerized Application Standalone Application OpenStack Infrastructure Complete your company profile. Add components to the product listing. Certify components for your product listing. 1.2.2. Certification testing Follow these high-level steps to run a certification test: Log in to the Red Hat Certification portal . Download the test plan. Configure the system under test (SUT) for running the tests. Download the test plan to our SUT. Run the certification tests on your system. Review and upload the test results to the certification portal. 1.2.3. Publishing the certified application When you complete all the certification checks successfully, you can submit the test results to Red Hat. Upon successful validation, you can publish your product on the Red Hat Ecosystem Catalog . Additional resources For more information about the requirements and policies for Red Hat OpenStack Certification, see Red Hat OpenStack Certification Policy Guide . 1.3. Getting support and giving feedback For any questions related to the Red Hat certification toolset, certification process, or procedure described in this documentation, refer to the KB Articles , Red Hat Customer Portal , and Red Hat Partner Connect . You can also open a support case to get support or submit feedback. 
To open a support case, see How do I open and manage a support case on the Customer Portal? Questions During Certification If you have any questions or responses about a specific certification, record them in the Comments section of the Dialog Tab of the certification entry. Warning Issues that can block a certification and might require resolution must be resolved through your Engineering Partner Manager or other engineering engagements. | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly-introduction-to-openstack-certification_rhosp-policy-guide
B.4. CRL Extensions | B.4. CRL Extensions B.4.1. About CRL Extensions Since its initial publication, the X.509 standard for CRL formats has been amended to include additional information within a CRL. This information is added through CRL extensions. The extensions defined by ANSI X9 and ISO/IEC/ITU for X.509 CRLs [X.509] [X9.55] allow additional attributes to be associated with CRLs. The Internet X.509 Public Key Infrastructure Certificate and CRL Profile , available at RFC 5280 , recommends a set of extensions to be used in CRLs. These extensions are called standard CRL extensions . The standard also allows custom extensions to be created and included in CRLs. These extensions are called private , proprietary , or custom CRL extensions and carry information unique to an organization or business. Applications may not able to validate CRLs that contain private critical extensions, so it is not recommended that custom extensions be used in a general context. Note Abstract Syntax Notation One (ASN.1) and Distinguished Encoding Rules (DER) standards are specified in the CCITT Recommendations X.208 and X.209. For a quick summary of ASN.1 and DER, see A Layman's Guide to a Subset of ASN.1, BER, and DER , which is available at RSA Laboratories' web site, http://www.rsa.com . B.4.1.1. Structure of CRL Extensions A CRL extension consists of the following parts: The object identifier (OID) for the extension. This identifier uniquely identifies the extension. It also determines the ASN.1 type of value in the value field and how the value is interpreted. When an extension appears in a CRL, the OID appears as the extension ID field ( extnID ) and the corresponding ASN.1 encoded structure appears as the value of the octet string ( extnValue ); examples are shown in Example B.4, "Sample Pretty-Print Certificate Extensions" . A flag or Boolean field called critical . The true or false value assigned to this field indicates whether the extension is critical or noncritical to the CRL. If the extension is critical and the CRL is sent to an application that does not understand the extension based on the extension's ID, the application must reject the CRL. If the extension is not critical and the CRL is sent to an application that does not understand the extension based on the extension's ID, the application can ignore the extension and accept the CRL. An octet string containing the DER encoding of the value of the extension. The application receiving the CRL checks the extension ID to determine if it can recognize the ID. If it can, it uses the extension ID to determine the type of value used. B.4.1.2. Sample CRL and CRL Entry Extensions The following is an example of an X.509 CRL version 2 extension. The Certificate System can display CRLs in readable pretty-print format, as shown here. As shown in the example, CRL extensions appear in sequence and only one instance of a particular extension may appear per CRL; for example, a CRL may contain only one Authority Key Identifier extension. However, CRL-entry extensions appear in appropriate entries in the CRL. A delta CRL is a subset of the CRL which contains only the changes since the last CRL was published. Any CRL which contains the delta CRL indicator extension is a delta CRL. B.4.2. Standard X.509 v3 CRL Extensions Reference In addition to certificate extensions, the X.509 proposed standard defines extensions to CRLs, which provide methods for associating additional attributes with Internet CRLs. 
These are one of two kinds: extensions to the CRL itself and extensions to individual certificate entries in the CRL. Section B.4.2.1, "Extensions for CRLs" Section B.4.2.2, "CRL Entry Extensions" B.4.2.1. Extensions for CRLs The following CRL descriptions are defined as part of the Internet X.509 v3 Public Key Infrastructure proposed standard. Section B.4.2.1.1, "authorityInfoAccess" Section B.4.2.1.2, "authorityKeyIdentifier" Section B.4.2.1.3, "CRLNumber" Section B.4.2.1.4, "deltaCRLIndicator" Section B.4.2.1.5, "FreshestCRL" Section B.4.2.1.6, "issuerAltName" Section B.4.2.1.7, "issuingDistributionPoint" Section B.4.2.1.5, "FreshestCRL" B.4.2.1.1. authorityInfoAccess The Authority Information Access extension identifies how delta CRL information is obtained. The freshestCRL extension is placed in the full CRL to indicate where to find the latest delta CRL. OID 1.3.6.1.5.5.7.1.1 Criticality PKIX requires that this extension must not be critical. Parameters Table B.39. Authority Infomation Access Configuration Parameters Parameter Description enable Specifies whether the rule is enabled or disabled. The default is to have this extension disabled. critical Sets whether the extension is marked as critical; the default is noncritical. numberOfAccessDescriptions Indicates the number of access descriptions, from 0 to any positive integer; the default is 0. When setting this parameter to an integer other than 0, set the number, and then click OK to close the window. Re-open the edit window for the rule, and the fields to set the points will be present. accessMethod n The only accepted value for this parameter is caIssuers. The caIssuers method is used when the information available lists certificates that can be used to verify the signature on the CRL. No other method should be used when the AIA extension is included in a CRL. accessLocationType n Specifies the type of access location for the n access description. The options are either DirectoryName or URI . accessLocation n If accessLocationType is set to DirectoryName , the value must be a string in the form of an X.500 name, similar to the subject name in a certificate. For example, CN=CACentral,OU=Research Dept,O=Example Corporation,C=US . If accessLocationType is set to URI , the name must be a URI; the URI must be an absolute pathname and must specify the host. For example, http://testCA.example.com/get/crls/here/ . B.4.2.1.2. authorityKeyIdentifier The Authority Key Identifier extension for a CRL identifies the public key corresponding to the private key used to sign the CRL. For details, see the discussion under certificate extensions at Section B.3.2, "authorityKeyIdentifier" . The PKIX standard recommends that the CA must include this extension in all CRLs it issues because a CA's public key can change, for example, when the key gets updated, or the CA may have multiple signing keys because of multiple concurrent key pairs or key changeover. In these cases, the CA ends up with more than one key pair. When verifying a signature on a certificate, other applications need to know which key was used in the signature. OID 2.5.29.35 Parameters Table B.40. AuthorityKeyIdentifierExt Configuration Parameters Parameter Description enable Specifies whether the rule is enabled or disabled. The default is to have this extension disabled. critical Sets whether the extension is marked as critical; the default is noncritical. B.4.2.1.3. CRLNumber The CRLNumber extension specifies a sequential number for each CRL issued by a CA. 
It allows users to easily determine when a particular CRL supersedes another CRL. PKIX requires that all CRLs have this extension. OID 2.5.29.20 Criticality This extension must not be critical. Parameters Table B.41. CRLNumber Configuration Parameters Parameter Description enable Specifies whether the rule is enabled, which is the default. critical Sets whether the extension is marked as critical; the default is noncritical. B.4.2.1.4. deltaCRLIndicator The deltaCRLIndicator extension generates a delta CRL, a list only of certificates that have been revoked since the last CRL; it also includes a reference to the base CRL. This updates the local database while ignoring unchanged information already in the local database. This can significantly improve processing time for applications that store revocation information in a format other than the CRL structure. OID 2.5.29.27 Criticality PKIX requires that this extension be critical if it exists. Parameters Table B.42. DeltaCRL Configuration Parameters Parameter Description enable Sets whether the rule is enabled. By default, it is disabled. critical Sets whether the extension is critical or noncritical. By default, this is critical. B.4.2.1.5. FreshestCRL The freshestCRL extension identifies how delta CRL information is obtained. The freshestCRL extension is placed in the full CRL to indicate where to find the latest delta CRL. OID 2.5.29.46 Criticality PKIX requires that this extension must be noncritical. Parameters Table B.43. FreshestCRL Configuration Parameters Parameter Description enable Sets whether the extension rule is enabled. By default, this is disabled. critical Marks the extension as critical or noncritical. The default is noncritical. numPoints Indicates the number of issuing points for the delta CRL, from 0 to any positive integer; the default is 0 . When setting this to an integer other than 0, set the number, and then click OK to close the window. Re-open the edit window for the rule, and the fields to set these points will be present. pointType n Specifies the type of issuing point for the n issuing point. For each number specified in numPoints , there is an equal number of pointType parameters. The options are either DirectoryName or URIName . pointName n If pointType is set to directoryName , the value must be a string in the form of an X.500 name, similar to the subject name in a certificate. For example, CN=CACentral,OU=Research Dept,O=Example Corporation,C=US . If pointType is set to URIName , the name must be a URI; the URI must be an absolute pathname and must specify the host. For example, http://testCA.example.com/get/crls/here/ . B.4.2.1.6. issuerAltName The Issuer Alternative Name extension allows additional identities to be associated with the issuer of the CRL, like binding attributes such as a mail address, a DNS name, an IP address (both IPv4 and IPv6), and a uniform resource indicator (URI), with the issuer of the CRL. For details, see the discussion under certificate extensions at Section B.3.7, "issuerAltName Extension" . OID 2.5.29.18 Parameters Table B.44. IssuerAlternativeName Configuration Parameters Parameter Description enable Sets whether the extension rule is enabled; by default, this is disabled. critical Sets whether the extension is critical; by default, this is noncritical. numNames Sets the total number of alternative names or identities permitted in the extension. 
Each name has a set of configuration parameters, nameType and name , which must have appropriate values or the rule returns an error. Change the total number of identities by changing the value specified in this field; there is no limit on the total number of identities that can be included in the extension. Each set of configuration parameters is distinguished by an integer derived from the value of this field. For example, if the numNames parameter is set to 2 , the derived integers are 0 and 1 . nameType n Specifies the general-name type; this can be any of the following: rfc822Name if the name is an Internet mail address. directoryName if the name is an X.500 directory name. dNSName if the name is a DNS name. ediPartyName if the name is a EDI party name. URL if the name is a URI (default). iPAddress if the name is an IP address. An IPv4 address must be in the format n.n.n.n or n.n.n.n,m.m.m.m . For example, 128.21.39.40 or 128.21.39.40,255.255.255.00 . An IPv6 address uses a 128-bit namespace, with the IPv6 address separated by colons and the netmask separated by periods. For example, 0:0:0:0:0:0:13.1.68.3 , FF01::43 , 0:0:0:0:0:0:13.1.68.3,FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:255.255.255.0 , and FF01::43,FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FF00:0000 . OID if the name is an object identifier. otherName if the name is in any other name form; this supports PrintableString , IA5String , UTF8String , BMPString , Any , and KerberosName . name n Specifies the general-name value; the allowed values depend on the name type specified in the nameType field. For rfc822Name , the value must be a valid Internet mail address in the local-part@domain format. For directoryName , the value must be a string X.500 name, similar to the subject name in a certificate. For example, CN=CACentral,OU=Research Dept,O=Example Corporation,C=US . For dNSName , the value must be a valid domain name in the DNS format. For example, testCA.example.com . For ediPartyName , the name must be an IA5String. For example, Example Corporation . For URL , the value must be a non-relative URI. For example, http://testCA.example.com . For iPAddress , the value must be a valid IP address specified in dot-separated numeric component notation. It can be the IP address or the IP address including the netmask. An IPv4 address must be in the format n.n.n.n or n.n.n.n,m.m.m.m . For example, 128.21.39.40 or 128.21.39.40,255.255.255.00 . An IPv6 address uses a 128-bit namespace, with the IPv6 address separated by colons and the netmask separated by periods. For example, 0:0:0:0:0:0:13.1.68.3 , FF01::43 , 0:0:0:0:0:0:13.1.68.3,FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:255.255.255.0 , and FF01::43,FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FF00:0000 . For OID , the value must be a unique, valid OID specified in the dot-separated numeric component notation. For example, 1.2.3.4.55.6.5.99 . Although custom OIDs can be used to evaluate and test the server, in a production environment, comply with the ISO rules for defining OIDs and for registering subtrees of IDs. For otherName , the names can be any other format; this supports PrintableString , IA5String , UTF8String , BMPString , Any , and KerberosName . PrintableString , IA5String , UTF8String , BMPString , and Any set a string to a base-64 encoded file specifying the subtree, such as /var/lib/pki/pki-tomcat/ca/othername.txt . KerberosName has the format Realm|NameType|NameStrings , such as realm1|0|userID1,userID2 . The name must be the absolute path to the file that contains the general name in its base-64 encoded format. 
For example, /var/lib/pki/pki-tomcat/ca/extn/ian/othername.txt . B.4.2.1.7. issuingDistributionPoint The Issuing Distribution Point CRL extension identifies the CRL distribution point for a particular CRL and indicates what kinds of revocation it covers, such as revocation of end-entity certificates only, CA certificates only, or revoked certificates that have a limited set of reason codes. PKIX Part I does not require this extension. OID 2.5.29.28 Criticality PKIX requires that this extension be critical if it exists. Parameters Table B.45. IssuingDistributionPoint Configuration Parameters Parameter Description enable Sets whether the extension is enabled; the default is disabled. critical Marks the extension as critical, the default, or noncritical. pointType Specifies the type of the issuing distribution point from the following: directoryName specifies that the type is an X.500 directory name. URI specifies that the type is a uniform resource indicator. RelativeToIssuer specifies that the type is a relative distinguished name (RDN), which represents a single node of a DN, such as ou=Engineering . pointName Gives the name of the issuing distribution point. The name of the distribution point depends on the value specified for the pointType parameter. For directoryName , the name must be an X.500 name. For example, cn=CRLCentral,ou=Research Dept,o=Example Corporation,c=US . For URIName , the name must be a URI that is an absolute pathname and specifies the host. For example, http://testCA.example.com/get/crls/here/ . Note The CRL may be stored in the directory entry corresponding to the CRL issuing point, which may be different than the directory entry of the CA. onlySomeReasons Specifies the reason codes associated with the distribution point. Permissible values are a combination of reason codes ( unspecified , keyCompromise , cACompromise , affiliationChanged , superseded , cessationOfOperation , certificateHold , and removeFromCRL ) separated by commas. Leave the field blank if the distribution point contains revoked certificates with all reason codes (default). onlyContainsCACerts Specifies that the distribution point contains user certificates only if set. By default, this is not set, which means the distribution point contains all types of certificates. indirectCRL Specifies that the distribution point contains an indirect CRL; by default, this is not selected. B.4.2.2. CRL Entry Extensions The sections that follow lists the CRL entry extension types that are defined as part of the Internet X.509 v3 Public Key Infrastructure proposed standard. All of these extensions are noncritical. B.4.2.2.1. certificateIssuer The Certificate Issuer extension identifies the certificate issuer associated with an entry in an indirect CRL. This extension is used only with indirect CRLs, which are not supported by the Certificate System. OID 2.5.29.29 B.4.2.2.2. invalidityDate The Invalidity Date extension provides the date on which the private key was compromised or that the certificate otherwise became invalid. OID 2.5.29.24 Parameters Table B.46. InvalidityDate Configuration Parameters Parameter Description enable Sets whether the extension rule is enabled or disabled. By default, this is enabled. critical Marks the extension as critical or noncritical; by default, this is noncritical. B.4.2.2.3. CRLReason The Reason Code extension identifies the reason for certificate revocation. OID 2.5.29.21 Parameters Table B.47. 
CRLReason Configuration Parameters Parameter Description enable Sets whether the extension rule is enabled or disabled. By default, this is enabled. critical Marks the extension as critical or noncritical. By default, this is noncritical. B.4.3. Netscape-Defined Certificate Extensions Reference Netscape defined certain certificate extensions for its products. Some of the extensions are now obsolete, and others have been superseded by the extensions defined in the X.509 proposed standard. All Netscape extensions should be tagged as noncritical, so that their presence in a certificate does not make that certificate incompatible with other clients. B.4.3.1. netscape-cert-type The Netscape Certificate Type extension can be used to limit the purposes for which a certificate can be used. It has been replaced by the X.509 v3 extensions Section B.3.6, "extKeyUsage" and Section B.3.3, "basicConstraints" . If the extension exists in a certificate, it limits the certificate to the uses specified in it. If the extension is not present, the certificate can be used for all applications, except for object signing. The value is a bit-string, where the individual bit positions, when set, certify the certificate for particular uses as follows: bit 0: SSL Client certificate bit 1: SSL Server certificate bit 2: S/MIME certificate bit 3: Object Signing certificate bit 4: reserved bit 5: SSL CA certificate bit 6: S/MIME CA certificate bit 7: Object Signing CA certificate OID 2.16.840.1.113730.1.1 B.4.3.2. netscape-comment The value of this extension is an IA5String. It is a comment that can be displayed to the user when the certificate is viewed. OID 2.16.840.1.113730.13 | [
"Certificate Revocation List: Data: Version: v2 Signature Algorithm: SHA1withRSA - 1.2.840.113549.1.1.5 Issuer: CN=Certificate Authority,O=Example Domain This Update: Wednesday, July 29, 2009 8:59:48 AM GMT-08:00 Next Update: Friday, July 31, 2009 8:59:48 AM GMT-08:00 Revoked Certificates: 1-3 of 3 Serial Number: 0x11 Revocation Date: Thursday, July 23, 2009 10:07:15 AM GMT-08:00 Extensions: Identifier: Revocation Reason - 2.5.29.21 Critical: no Reason: Privilege_Withdrawn Serial Number: 0x1A Revocation Date: Wednesday, July 29, 2009 8:50:11 AM GMT-08:00 Extensions: Identifier: Revocation Reason - 2.5.29.21 Critical: no Reason: Certificate_Hold Identifier: Invalidity Date - 2.5.29.24 Critical: no Invalidity Date: Sun Jul 26 23:00:00 GMT-08:00 2009 Serial Number: 0x19 Revocation Date: Wednesday, July 29, 2009 8:50:49 AM GMT-08:00 Extensions: Identifier: Revocation Reason - 2.5.29.21 Critical: no Reason: Key_Compromise Identifier: Invalidity Date - 2.5.29.24 Critical: no Invalidity Date: Fri Jul 24 23:00:00 GMT-08:00 2009 Extensions: Identifier: Authority Info Access: - 1.3.6.1.5.5.7.1.1 Critical: no Access Description: Method #0: ocsp Location #0: URIName: http://example.com:9180/ca/ocsp Identifier: Issuer Alternative Name - 2.5.29.18 Critical: no Issuer Names: DNSName: example.com Identifier: Authority Key Identifier - 2.5.29.35 Critical: no Key Identifier: 50:52:0C:AA:22:AC:8A:71:E3:91:0C:C5:77:21:46:9C: 0F:F8:30:60 Identifier: Freshest CRL - 2.5.29.46 Critical: no Number of Points: 1 Point 0 Distribution Point: [URIName: http://server.example.com:8443/ca/ee/ca/getCRL?op=getDeltaCRL&crlIssuingPoint=MasterCRL] Identifier: CRL Number - 2.5.29.20 Critical: no Number: 39 Identifier: Issuing Distribution Point - 2.5.29.28 Critical: yes Distribution Point: Full Name: URIName: http://example.com:9180/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL Only Contains User Certificates: no Only Contains CA Certificates: no Indirect CRL: no Signature: Algorithm: SHA1withRSA - 1.2.840.113549.1.1.5 Signature: 47:D2:CD:C9:E5:F5:9D:56:0A:97:31:F5:D5:F2:51:EB: 1F:CF:FA:9E:63:D4:80:13:85:E5:D8:27:F0:69:67:B5: 89:4F:59:5E:69:E4:39:93:61:F2:E3:83:51:0B:68:26: CD:99:C4:A2:6C:2B:06:43:35:36:38:07:34:E4:93:80: 99:2F:79:FB:76:E8:3D:4C:15:5A:79:4E:E5:3F:7E:FC: D8:78:0D:1D:59:A0:4C:14:42:B7:22:92:89:38:3A:4C: 4A:3A:06:DE:13:74:0E:E9:63:74:D0:2F:46:A1:03:37: 92:F0:93:D9:AA:F8:13:C5:06:25:02:B0:FD:3B:41:E7: 62:6F:67:A3:9F:F5:FA:03:41:DA:8D:FD:EA:2F:E3:2B: 3E:F8:E9:CC:3B:9F:E4:ED:73:F2:9E:B9:54:14:C1:34: 68:A7:33:8F:AF:38:85:82:40:A2:06:97:3C:B4:88:43: 7B:AF:5D:87:C4:47:63:4A:11:65:E3:75:55:4D:98:97: C2:2E:62:08:A4:04:35:5A:FE:0A:5A:6E:F1:DE:8E:15: 27:1E:0F:87:33:14:16:2E:57:F7:DC:77:BE:D2:75:AB: A9:7C:42:1F:84:6D:40:EC:E7:ED:84:F8:14:16:28:33: FD:11:CD:C5:FC:49:B7:7B:39:57:B3:E6:36:E5:CD:B6",
"ertificate Revocation List: Data: Version: v2 Signature Algorithm: SHA1withRSA - 1.2.840.113549.1.1.5 Issuer: CN=Certificate Authority,O=SjcRedhat Domain This Update: Wednesday, July 29, 2009 9:02:28 AM GMT-08:00 Next Update: Thursday, July 30, 2009 9:02:28 AM GMT-08:00 Revoked Certificates: Serial Number: 0x1A Revocation Date: Wednesday, July 29, 2009 9:00:48 AM GMT-08:00 Extensions: Identifier: Revocation Reason - 2.5.29.21 Critical: no Reason: Remove_from_CRL Serial Number: 0x17 Revocation Date: Wednesday, July 29, 2009 9:02:16 AM GMT-08:00 Extensions: Identifier: Revocation Reason - 2.5.29.21 Critical: no Reason: Certificate_Hold Identifier: Invalidity Date - 2.5.29.24 Critical: no Invalidity Date: Mon Jul 27 23:00:00 GMT-08:00 2009 Extensions: Identifier: Authority Info Access: - 1.3.6.1.5.5.7.1.1 Critical: no Access Description: Method #0: ocsp Location #0: URIName: http://server.example.com:8443/ca/ocsp Identifier: Delta CRL Indicator - 2.5.29.27 Critical: yes Base CRL Number: 39 Identifier: Issuer Alternative Name - 2.5.29.18 Critical: no Issuer Names: DNSName: a-f8.sjc.redhat.com Identifier: Authority Key Identifier - 2.5.29.35 Critical: no Key Identifier: 50:52:0C:AA:22:AC:8A:71:E3:91:0C:C5:77:21:46:9C: 0F:F8:30:60 Identifier: CRL Number - 2.5.29.20 Critical: no Number: 41 Identifier: Issuing Distribution Point - 2.5.29.28 Critical: yes Distribution Point: Full Name: URIName: http://server.example.com:8443/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL Only Contains User Certificates: no Only Contains CA Certificates: no Indirect CRL: no Signature: Algorithm: SHA1withRSA - 1.2.840.113549.1.1.5 Signature: 68:28:DA:90:D5:39:CB:6D:BE:42:04:77:C9:E4:09:60: C1:97:A6:99:AB:A0:5B:A2:F3:8B:5E:4E:D6:05:70:B0: 87:1F:D7:0E:4B:C6:B2:DE:8B:92:D8:7C:3B:36:1C:79: 96:2A:64:E6:7A:25:1D:E7:40:62:48:7A:24:C9:9D:11: A6:7F:BB:6B:03:A0:9C:1D:BC:1C:EE:9A:4B:A6:48:2C: 3B:5E:2B:B1:70:3C:C3:42:96:28:26:AB:82:18:F2:E9: F2:55:48:A8:7E:7F:FE:D4:3D:0B:EA:A2:2F:4E:E6:C3: C3:C1:6A:E5:C6:85:5B:42:B1:70:2A:C6:E1:D9:0C:AF: DA:01:22:FF:80:6E:2E:A7:E5:34:DC:AF:E6:C2:B5:B3: 1B:FC:28:36:8A:91:4A:22:E7:03:A5:ED:4E:62:0C:D9: 7F:81:BB:80:99:B8:61:2A:02:C6:9C:41:2E:01:82:21: 80:82:69:52:BD:B2:AA:DB:0F:80:0A:7E:2A:F3:15:32: 69:D2:40:0D:39:59:93:75:A2:ED:24:70:FB:EE:19:C0: BE:A2:14:36:D0:AC:E8:E2:EE:23:83:DD:BC:DF:38:1A: 9E:37:AF:E3:50:D9:47:9D:22:7C:36:35:BF:13:2C:16: A2:79:CF:05:41:88:8E:B6:A2:4E:B3:48:6D:69:C6:38"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/crl_extensions |
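Outside of the Certificate System pretty-print shown above, the same CRL and CRL-entry extensions can be inspected with standard OpenSSL tooling. This is a minimal sketch: the host name and issuing-point URLs are copied from the sample dumps above rather than from a live deployment, and whether you need -inform DER or the default PEM depends on how the CRL is published in your environment.

# Fetch the full CRL and the delta CRL from the issuing points shown in the sample output
curl -o master.crl "http://server.example.com:8443/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL"
curl -o delta.crl "http://server.example.com:8443/ca/ee/ca/getCRL?op=getDeltaCRL&crlIssuingPoint=MasterCRL"

# Print the CRL Number, Authority Key Identifier, Issuing Distribution Point, and per-entry reason codes
openssl crl -inform DER -in master.crl -text -noout
openssl crl -inform DER -in delta.crl -text -noout

Comparing the CRL Number of the delta CRL with the Base CRL Number in its Delta CRL Indicator extension confirms which full CRL the delta applies to.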
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 1.0-56 Thu May 23 2019 Jiri Herrmann Version for 7.6 GA publication Revision 1.0-55 Thu Oct 25 2018 Jiri Herrmann Version for 7.6 GA publication Revision 1.0-53 Thu Aug 5 2018 Jiri Herrmann Version for 7.6 Beta publication Revision 1.0-52 Thu Apr 5 2018 Jiri Herrmann Version for 7.5 GA publication Revision 1.0-49 Thu Jul 27 2017 Jiri Herrmann Version for 7.4 GA publication Revision 1.0-46 Mon Oct 17 2016 Jiri Herrmann Version for 7.3 GA publication Revision 1.0-44 Mon Dec 21 2015 Laura Novich Republished the guide for several bug fixes Revision 1.0-43 Thu Oct 08 2015 Jiri Herrmann Cleaned up the Revision History Revision 1.0-42 Sun Jun 28 2015 Jiri Herrmann Updated for the 7.2 beta release | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/appe-virtualization_getting_started_guide-revision_history |
function::proc_mem_string_pid | function::proc_mem_string_pid Name function::proc_mem_string_pid - Human readable string of process memory usage Synopsis Arguments pid The PID of the process to examine Description Returns a human-readable string showing the size, rss, shr, txt, and data of the memory used by the given process. For example, "size: 301m, rss: 11m, shr: 8m, txt: 52k, data: 2248k". A usage sketch follows the synopsis listing below. | [
"proc_mem_string_pid:string(pid:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-string-pid |
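Because the function takes an explicit PID, a one-line SystemTap invocation is enough to exercise it. This sketch assumes the systemtap runtime and matching kernel debuginfo are installed, that the command is run with sufficient privileges, and that 1234 stands in for a real process ID.

# Print the memory summary for the target process once and exit; -x supplies the PID returned by target()
stap -x 1234 -e 'probe begin { println(proc_mem_string_pid(target())); exit() }'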
Appendix C. A Reference of Identity Management Files and Logs | Appendix C. A Reference of Identity Management Files and Logs C.1. Identity Management Configuration Files and Directories Table C.1. IdM Server and Client Configuration Files and Directories Directory or File Description /etc/ipa/ The main IdM configuration directory. /etc/ipa/default.conf Primary configuration file for IdM. Referenced when servers and clients start and when the user uses the ipa utility. /etc/ipa/server.conf An optional configuration file, does not exist by default. Referenced when the IdM server starts. If the file exists, it takes precedence over /etc/ipa/default.conf . /etc/ipa/cli.conf An optional configuration file, does not exist by default. Referenced when the user uses the ipa utility. If the file exists, it takes precedence over /etc/ipa/default.conf . /etc/ipa/ca.crt The CA certificate issued by the IdM server's CA. ~/.ipa/ The user-specific IdM directory created on the local system the first time the user runs an IdM command. Users can set individual configuration overrides by creating user-specific default.conf , server.conf , or cli.conf files in ~./ipa/ . /etc/sssd/sssd.conf Configuration for the IdM domain and for IdM services used by SSSD. /usr/share/sssd/sssd.api.d/sssd-ipa.conf A schema of IdM-related SSSD options and their values. /etc/gssproxy/ The directory for the configuration of the GSS-Proxy protocol. The directory contains files for each GSS-API service and a general /etc/gssproxy/gssproxy.conf file. /etc/certmonger/certmonger.conf This configuration file contains default settings for the certmonger daemon that monitors certificates for impending expiration. /etc/custodia/custodia.conf Configuration file for the Custodia service that manages secrets for IdM applications. Table C.2. System Service Files and Directories Directory or File Description /etc/sysconfig/ systemd -specific files Table C.3. Web UI Files and Directories Directory or File Description /etc/ipa/html/ A symbolic link for the HTML files used by the IdM web UI. /etc/httpd/conf.d/ipa.conf Configuration files used by the Apache host for the web UI application. /etc/httpd/conf.d/ipa-rewrite.conf /etc/httpd/conf/ipa.keytab The keytab file used by the web server. /usr/share/ipa/ The directory for all HTML files, scripts, and stylesheets used by the web UI. /usr/share/ipa/ipa.conf /usr/share/ipa/updates/ Contains LDAP data, configuration, and schema updates for IdM. /usr/share/ipa/html/ Contains the HTML files, JavaScript files, and stylesheets used by the web UI. /usr/share/ipa/migration/ Contains HTML pages, stylesheets, and Python scripts used for running the IdM server in migration mode. /usr/share/ipa/ui/ Contains the scripts used by the UI to perform IdM operations. /etc/httpd/conf.d/ipa-pki-proxy.conf The configuration file for web-server-to-Certificate-System bridging. Table C.4. Kerberos Files and Directories Directory or File Description /etc/krb5.conf The Kerberos service configuration file. /var/lib/sss/pubconf/krb5.include.d/ Includes IdM-specific overrides for Kerberos client configuration. Table C.5. Directory Server Files and Directories Directory or File Description /var/lib/dirsrv/slapd- REALM_NAME / The database associated with the Directory Server instance used by the IdM server. /etc/sysconfig/dirsrv IdM-specific configuration of the dirsrv systemd service. /etc/dirsrv/slapd- REALM_NAME / The configuration and schema files associated with the Directory Server instance used by the IdM server. 
Table C.6. Certificate System Files and Directories Directory or File Description /etc/pki/pki-tomcat/ca/ The main directory for the IdM CA instance. /var/lib/pki/pki-tomcat/conf/ca/CS.cfg The main configuration file for the IdM CA instance. Table C.7. Cache Files and Directories Directory or File Description ~/.cache/ipa/ Contains a per-server API schema for the IdM client. IdM caches the API schema on the client for one hour. Table C.8. System Backup Files and Directories Directory or File Description /var/lib/ipa/sysrestore/ Contains backups of the system files and scripts that were reconfigured when the IdM server was installed. Includes the original .conf files for NSS, Kerberos (both krb5.conf and kdc.conf ), and NTP. /var/lib/ipa-client/sysrestore/ Contains backups of the system files and scripts that were reconfigured when the IdM client was installed. Commonly, this is the sssd.conf file for SSSD authentication services. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/config-files-logs |
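The tables above are reference material, but a few read-only commands make it easy to confirm what a particular server is actually using. These commands are illustrative assumptions rather than steps from the guide: output varies by deployment, and reading the keytab and sssd.conf requires root.

# Show the effective IdM options without comment lines
grep -v '^#' /etc/ipa/default.conf

# List the keys in the web server keytab referenced in the web UI table
klist -kt /etc/httpd/conf/ipa.keytab

# Confirm which SSSD domain section is configured for the IdM domain
grep -A3 '^\[domain' /etc/sssd/sssd.conf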
Chapter 3. Deployment | Chapter 3. Deployment As a storage administrator, you can deploy the Ceph Object Gateway using the Ceph Orchestrator with the command line interface or the service specification. You can also configure multi-site Ceph Object Gateways, and remove the Ceph Object Gateway using the Ceph Orchestrator. The cephadm command deploys the Ceph Object Gateway as a collection of daemons that manages a single-cluster deployment or a particular realm and zone in a multi-site deployment. Note With cephadm , the Ceph Object Gateway daemons are configured using the Ceph Monitor configuration database instead of the ceph.conf file or the command line options. If the configuration is not in the client.rgw section, then the Ceph Object Gateway daemons start up with default settings and bind to port 80 . Warning If you want Cephadm to handle the setting of a realm and zone, specify the realm and zone in the service specification during the deployment of the Ceph Object Gateway. If you want to change that realm or zone at a later point, ensure to update and reapply the rgw_realm and rgw_zone parameters in the specification file. If you want to handle these options manually without Cephadm, do not include them in the service specification. Cephadm still deploys the Ceph Object Gateway daemons without setting the configuration option for which realm or zone the daemons should use. In this case, the update of the specification file is not necesarry. This section covers the following administrative tasks: Deploying the Ceph Object Gateway using the command line interface . Deploying the Ceph Object Gateway using the service specification . Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator . Removing the Ceph Object Gateway using the Ceph Orchestrator . 3.1. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Root-level access to all the nodes. Available nodes on the storage cluster. All the managers, monitors, and OSDs are deployed in the storage cluster. 3.2. Deploying the Ceph Object Gateway using the command line interface Using the Ceph Orchestrator, you can deploy the Ceph Object Gateway with the ceph orch command in the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example You can deploy the Ceph object gateway daemons in three different ways: Method 1 Create realm, zone group, zone, and then use the placement specification with the host name: Create a realm: Syntax Example Create a zone group: Syntax Example Create a zone: Syntax Example Commit the changes: Syntax Example Run the ceph orch apply command: Syntax Example Method 2 Use an arbitrary service name to deploy two Ceph Object Gateway daemons for a single cluster deployment: Syntax Example Method 3 Use an arbitrary service name on a labeled set of hosts: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph object gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 3.3. Deploying the Ceph Object Gateway using the service specification You can deploy the Ceph Object Gateway using the service specification with either the default or the custom realms, zones, and zone groups. Prerequisites A running Red Hat Ceph Storage cluster. 
Root-level access to the bootstrapped host. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure As a root user, create a specification file: Example Edit the radosgw.yml file to include the following details for the default realm, zone, and zone group: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph Object Gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Optional: For custom realm, zone, and zone group, create the resources and then create the radosgw.yml file: Create the custom realm, zone, and zone group: Example Create the radosgw.yml file with the following details: Example Mount the radosgw.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon. Deploy the Ceph Object Gateway using the service specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 3.4. Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator Ceph Orchestrator supports multi-site configuration options for the Ceph Object Gateway. You can configure each object gateway to work in an active-active zone configuration allowing writes to a non-primary zone. The multi-site configuration is stored within a container called a realm. The realm stores zone groups, zones, and a time period. The rgw daemons handle the synchronization eliminating the need for a separate synchronization agent, thereby operating with an active-active configuration. You can also deploy multi-site zones using the command line interface (CLI). Note The following configuration assumes at least two Red Hat Ceph Storage clusters are in geographically separate locations. However, the configuration also works on the same site. Prerequisites At least two running Red Hat Ceph Storage clusters. At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Root-level access to all the nodes. Nodes or containers are added to the storage cluster. All Ceph Manager, Monitor and OSD daemons are deployed. Procedure In the cephadm shell, configure the primary zone: Create a realm: Syntax Example If the storage cluster has a single realm, then specify the --default flag. Create a primary zone group: Syntax Example Create a primary zone: Syntax Example Optional: Delete the default zone, zone group, and the associated pools. Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Also, removing the default zone group deletes the system user. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Create a system user: Syntax Example Make a note of the access_key and secret_key . Add the access key and system key to the primary zone: Syntax Example Commit the changes: Syntax Example Outside the cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example In the Cephadm shell, configure the secondary zone. Pull the primary realm configuration from the host: Syntax Example Pull the primary period configuration from the host: Syntax Example Configure a secondary zone: Syntax Example Optional: Delete the default zone. 
Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Update the Ceph configuration database: Syntax Example Commit the changes: Syntax Example Outside the Cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example Optional: Deploy multi-site Ceph Object Gateways using the placement specification: Syntax Example Verification Check the synchronization status to verify the deployment: Example 3.5. Removing the Ceph Object Gateway using the Ceph Orchestrator You can remove the Ceph object gateway daemons using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one Ceph object gateway daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example List the service: Example Remove the service: Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph object gateway using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph object gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. | [
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"2 label:rgw\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"touch radosgw.yml",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm --default radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup --default radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone --default radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ --secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --rgw-realm= PRIMARY_REALM --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY --default",
"radosgw-admin realm pull --rgw-realm=test_realm --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --default",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/object_gateway_guide/deployment |
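Whichever of the deployment paths above you follow (command line, service specification, or multi-site), a few read-only checks confirm that the gateways are serving and, for multi-site, replicating. This is a sketch built on the example values used earlier in this chapter (host01, the us zone group, port 80); the curl probe only expects the anonymous ListAllMyBucketsResult response and does not perform authenticated S3 requests.

# Confirm the service and daemons, and export the applied specification for review
ceph orch ls rgw
ceph orch ps --daemon_type=rgw
ceph orch ls rgw --export

# Probe a gateway directly; an unauthenticated request should return an empty ListAllMyBucketsResult XML document
curl http://host01:80

# In a multi-site setup, check the zone group wiring and the replication state on each site
radosgw-admin zonegroup get --rgw-zonegroup=us
radosgw-admin sync status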
Chapter 19. KubeScheduler [operator.openshift.io/v1] | Chapter 19. KubeScheduler [operator.openshift.io/v1] Description KubeScheduler provides information to configure an operator to manage scheduler. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes Scheduler status object status is the most recently observed status of the Kubernetes Scheduler 19.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes Scheduler Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 19.1.2. 
.status Description status is the most recently observed status of the Kubernetes Scheduler Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 19.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 19.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 19.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 19.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 19.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 19.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. 
nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 19.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeschedulers DELETE : delete collection of KubeScheduler GET : list objects of kind KubeScheduler POST : create a KubeScheduler /apis/operator.openshift.io/v1/kubeschedulers/{name} DELETE : delete a KubeScheduler GET : read the specified KubeScheduler PATCH : partially update the specified KubeScheduler PUT : replace the specified KubeScheduler /apis/operator.openshift.io/v1/kubeschedulers/{name}/status GET : read status of the specified KubeScheduler PATCH : partially update status of the specified KubeScheduler PUT : replace status of the specified KubeScheduler 19.2.1. /apis/operator.openshift.io/v1/kubeschedulers Table 19.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeScheduler Table 19.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeScheduler Table 19.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.5. HTTP responses HTTP code Reponse body 200 - OK KubeSchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeScheduler Table 19.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.7. Body parameters Parameter Type Description body KubeScheduler schema Table 19.8. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 202 - Accepted KubeScheduler schema 401 - Unauthorized Empty 19.2.2. /apis/operator.openshift.io/v1/kubeschedulers/{name} Table 19.9. Global path parameters Parameter Type Description name string name of the KubeScheduler Table 19.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeScheduler Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 19.12. Body parameters Parameter Type Description body DeleteOptions schema Table 19.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeScheduler Table 19.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.15. 
HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeScheduler Table 19.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.17. Body parameters Parameter Type Description body Patch schema Table 19.18. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeScheduler Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body KubeScheduler schema Table 19.21. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 401 - Unauthorized Empty 19.2.3. /apis/operator.openshift.io/v1/kubeschedulers/{name}/status Table 19.22. Global path parameters Parameter Type Description name string name of the KubeScheduler Table 19.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeScheduler Table 19.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.25. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeScheduler Table 19.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.27. Body parameters Parameter Type Description body Patch schema Table 19.28. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeScheduler Table 19.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.30. Body parameters Parameter Type Description body KubeScheduler schema Table 19.31. HTTP responses HTTP code Reponse body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/kubescheduler-operator-openshift-io-v1 |
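In practice, most interactions with this resource go through the oc client rather than the raw HTTP endpoints listed above. A minimal sketch, assuming the cluster-scoped instance is named cluster (the name is an assumption for illustration, not part of this API reference): oc get kubescheduler cluster -o yaml reads the resource, and oc patch kubescheduler cluster --type merge -p '{"spec":{"logLevel":"Debug"}}' partially updates it, here raising the logLevel field described in the .spec table.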
Chapter 9. Using Streams for Apache Kafka with MirrorMaker 2 | Chapter 9. Using Streams for Apache Kafka with MirrorMaker 2 Use MirrorMaker 2 to replicate data between two or more active Kafka clusters, within or across data centers. To configure MirrorMaker 2, edit the config/connect-mirror-maker.properties configuration file. If required, you can enable distributed tracing for MirrorMaker 2 . Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages . Note MirrorMaker 2 has features not supported by the version of MirrorMaker. However, you can configure MirrorMaker 2 to be used in legacy mode . 9.1. Configuring active/active or active/passive modes You can use MirrorMaker 2 in active/passive or active/active cluster configurations. active/active cluster configuration An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster. active/passive cluster configuration An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination. 9.1.1. Bidirectional replication (active/active) The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 9.1. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 9.1.2. Unidirectional replication (active/passive) The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. 9.2. Configuring MirrorMaker 2 connectors Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors: MirrorSourceConnector The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run. MirrorCheckpointConnector The checkpoint connector periodically tracks offsets. 
If enabled, it also synchronizes consumer group offsets between the source and target cluster. MirrorHeartbeatConnector The heartbeat connector periodically checks connectivity between the source and target cluster. The following table describes connector properties and the connectors you configure to use them. Table 9.1. MirrorMaker 2 connector configuration properties Property sourceConnector checkpointConnector heartbeatConnector admin.timeout.ms Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). [✓] [✓] [✓] replication.policy.class Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy . [✓] [✓] [✓] replication.policy.separator The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property as topics retain their original names. [✓] [✓] [✓] consumer.poll.timeout.ms Timeout when polling the source cluster. Default is 1000 (1 second). [✓] [✓] offset-syncs.topic.location The location of the offset-syncs topic, which can be the source (default) or target cluster. [✓] [✓] topic.filter.class Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter . [✓] [✓] config.property.filter.class Topic filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter . [✓] config.properties.exclude Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. [✓] offset.lag.max Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100 . [✓] offset-syncs.topic.replication.factor Replication factor for the internal offset-syncs topic. Default is 3 . [✓] refresh.topics.enabled Enables check for new topics and partitions. Default is true . [✓] refresh.topics.interval.seconds Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. [✓] replication.factor The replication factor for new topics. Default is 2 . [✓] sync.topic.acls.enabled Enables synchronization of ACLs from the source cluster. Default is true . For more information, see Section 9.5, "ACL rules synchronization" . [✓] sync.topic.acls.interval.seconds Frequency of ACL synchronization. Default is 600 (10 minutes). [✓] sync.topic.configs.enabled Enables synchronization of topic configuration from the source cluster. Default is true . [✓] sync.topic.configs.interval.seconds Frequency of topic configuration synchronization. Default 600 (10 minutes). [✓] checkpoints.topic.replication.factor Replication factor for the internal checkpoints topic. Default is 3 . [✓] emit.checkpoints.enabled Enables synchronization of consumer offsets to the target cluster. Default is true . [✓] emit.checkpoints.interval.seconds Frequency of consumer offset synchronization. Default is 60 (1 minute). [✓] group.filter.class Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter . [✓] refresh.groups.enabled Enables check for new consumer groups. Default is true . 
[✓] refresh.groups.interval.seconds Frequency of consumer group refresh. Default is 600 (10 minutes). [✓] sync.group.offsets.enabled Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false . [✓] sync.group.offsets.interval.seconds Frequency of consumer group offset synchronization. Default is 60 (1 minute). [✓] emit.heartbeats.enabled Enables connectivity checks on the target cluster. Default is true . [✓] emit.heartbeats.interval.seconds Frequency of connectivity checks. Default is 1 (1 second). [✓] heartbeats.topic.replication.factor Replication factor for the internal heartbeats topic. Default is 3 . [✓] 9.2.1. Changing the location of the consumer group offsets topic MirrorMaker 2 tracks offsets for consumer groups using internal topics. offset-syncs topic The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata. checkpoints topic The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster. 9.2.2. Synchronizing consumer group offsets The __consumer_offsets topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster. Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration, and setting the property to true . Synchronization is disabled by default. When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics. Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned. If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. 
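For example, a minimal sketch of enabling the synchronization in the connect-mirror-maker.properties file, shown with the default interval values: sync.group.offsets.enabled=true sync.group.offsets.interval.seconds=60 emit.checkpoints.interval.seconds=60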
You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property, which is performed every 10 minutes by default. Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages. Note If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic. 9.2.3. Deciding when to use the heartbeat connector The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat topic is located on the target cluster, which allows it to do the following: Identify all source clusters it is mirroring data from Verify the liveness and latency of the mirroring process This helps to make sure that the process is not stuck or has stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration. 9.2.4. Aligning the configuration of MirrorMaker 2 connectors To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors: replication.policy.class replication.policy.separator offset-syncs.topic.location topic.filter.class For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings. 9.3. Connector producer and consumer configuration MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings. Important Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change. Producer and consumer configuration applies to all connectors. You specify the configuration in the config/connect-mirror-maker.properties file. Use the properties file to override any default configuration for the producers and consumers in the following format: <source_cluster_name> .consumer. <property> <source_cluster_name> .producer. <property> <target_cluster_name> .consumer. <property> <target_cluster_name> .producer. <property> The following example shows how you configure the producers and consumers. Though the properties are set for all connectors, some configuration properties are only relevant to certain connectors. Example configuration for connector producers and consumers clusters=cluster-1,cluster-2 # ... cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000 9.4. 
Specifying a maximum number of tasks Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups. Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks. You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasks.max property. Without specifying a maximum number of tasks, the default setting is a single task. The heartbeat connector always uses a single task. The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasks.max . For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups. tasks.max configuration for MirrorMaker connectors clusters=cluster-1,cluster-2 # ... tasks.max = 10 By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when adjusting lower. More frequent checks can have a negative impact on performance. 9.5. ACL rules synchronization If AclAuthorizer is being used, ACL rules that manage access to brokers also apply to remote topics. Users that can read a source topic can read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 9.6. Running MirrorMaker 2 in dedicated mode Use MirrorMaker 2 to synchronize data between Kafka clusters through configuration. This procedure shows how to configure and run a dedicated single-node MirrorMaker 2 cluster. Dedicated clusters use Kafka Connect worker nodes to mirror data between Kafka clusters. Note It is also possible to run MirrorMaker 2 in distributed mode. MirrorMaker 2 operates as connectors in both dedicated and distributed modes. When running a dedicated MirrorMaker cluster, connectors are configured in the Kafka Connect cluster. As a consequence, this allows direct access to the Kafka Connect cluster, the running of additional connectors, and use of the REST API. For more information, refer to the Apache Kafka documentation . The version of MirrorMaker continues to be supported, by running MirrorMaker 2 in legacy mode . 
The configuration must specify: Each Kafka cluster Connection information for each cluster, including TLS authentication The replication flow and direction Cluster to cluster Topic to topic Replication rules Committed offset tracking intervals This procedure describes how to implement MirrorMaker 2 by creating the configuration in a properties file, then passing the properties when using the MirrorMaker script file to set up the connections. You can specify the topics and consumer groups you wish to replicate from a source cluster. You specify the names of the source and target clusters, then specify the topics and consumer groups to replicate. In the following example, topics and consumer groups are specified for replication from cluster 1 to 2. Example configuration to replicate specific topics and consumer groups clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2 You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set these properties. You can also replicate all topics and consumer groups by using .* as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster. Before you begin A sample configuration properties file is provided in ./config/connect-mirror-maker.properties . Prerequisites You need Streams for Apache Kafka installed on the hosts of each Kafka cluster node you are replicating. Procedure Open the sample properties file in a text editor, or create a new one, and edit the file to include connection information and the replication flows for each Kafka cluster. The following example shows a configuration to connect two clusters, cluster-1 and cluster-2 , bidirectionally. Cluster names are configurable through the clusters property. Example MirrorMaker 2 configuration clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15 1 Each Kafka cluster is identified with its alias. 2 Connection information for cluster-1 , using the bootstrap address and port 443 . Both clusters use port 443 to connect to Kafka using OpenShift Routes . 3 The ssl. properties define TLS configuration for cluster-1 . 4 Connection information for cluster-2 . 5 The ssl. properties define the TLS configuration for cluster-2 . 6 Replication flow enabled from cluster-1 to cluster-2 . 
7 Replication flow enabled from cluster-2 to cluster-1 . 8 Replication of all topics from cluster-1 to cluster-2 . The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. 9 Replication of specific topics from cluster-2 to cluster-1 . 10 Replication of all consumer groups from cluster-1 to cluster-2 . The checkpoint connector replicates the specified consumer groups. 11 Replication of specific consumer groups from cluster-2 to cluster-1 . 12 Defines the separator used for the renaming of remote topics. 13 When enabled, ACLs are applied to synchronized topics. The default is false . 14 The period between checks for new topics to synchronize. 15 The period between checks for new consumer groups to synchronize. OPTION: If required, add a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is used for active/passive backups and data migration. replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy OPTION: If you want to synchronize consumer group offsets, add configuration to enable and manage the synchronization: refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3 1 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default. 2 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization. 3 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks. Start ZooKeeper and Kafka in the target clusters: su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon \ /opt/kafka/config/zookeeper.properties /opt/kafka/bin/kafka-server-start.sh -daemon \ /opt/kafka/config/server.properties Start MirrorMaker with the cluster connection configuration and replication policies you defined in your properties file: /opt/kafka/bin/connect-mirror-maker.sh \ /opt/kafka/config/connect-mirror-maker.properties MirrorMaker sets up connections between the clusters. For each target cluster, verify that the topics are being replicated: /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list 9.7. (Deprecated) Using MirrorMaker 2 in legacy mode This procedure describes how to configure MirrorMaker 2 to use it in legacy mode. Legacy mode supports the version of MirrorMaker. The MirrorMaker script /opt/kafka/bin/kafka-mirror-maker.sh can run MirrorMaker 2 in legacy mode. Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. Kafka MirrorMaker 1 will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use MirrorMaker 2 with the IdentityReplicationPolicy . Prerequisites You need the properties files you currently use with the legacy version of MirrorMaker. /opt/kafka/config/consumer.properties /opt/kafka/config/producer.properties Procedure Edit the MirrorMaker consumer.properties and producer.properties files to turn off MirrorMaker 2 features. 
For example: replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false 1 Emulate the previous version of MirrorMaker. 2 MirrorMaker 2 features disabled, including the internal checkpoint and heartbeat topics. Save the changes and restart MirrorMaker with the properties files you used with the previous version of MirrorMaker: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh \ --consumer.config /opt/kafka/config/consumer.properties \ --producer.config /opt/kafka/config/producer.properties \ --num.streams=2 The consumer properties provide the configuration for the source cluster and the producer properties provide the target cluster configuration. MirrorMaker sets up connections between the clusters. Start ZooKeeper and Kafka in the target cluster: su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties For the target cluster, verify that the topics are being replicated: /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list | [
"clusters=cluster-1,cluster-2 cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000",
"clusters=cluster-1,cluster-2 tasks.max = 10",
"clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2",
"clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15",
"replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy",
"refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3",
"su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list",
"replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2",
"su - kafka /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-mirrormaker-str |
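As a concrete illustration of the remote topic naming described above (assuming the - separator configured in the dedicated-mode example and a source topic named topic-1, both illustrative): listing topics on cluster-2 with /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list would be expected to show both its own topic-1 and the mirrored cluster-1-topic-1, while cluster-1 would show topic-1 and the mirrored cluster-2-topic-1; because the prefix flags the originating cluster, neither remote topic is replicated back to its source.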
Chapter 25. Verifying system certificates using IdM Healthcheck | Chapter 25. Verifying system certificates using IdM Healthcheck Learn more about identifying issues with system certificates in Identity Management (IdM) by using the Healthcheck tool. For details, see Healthcheck in IdM . 25.1. System certificates Healthcheck tests The Healthcheck tool includes several tests for verifying system (DogTag) certificates. To see all tests, run the ipa-healthcheck command with the --list-sources option: You can find all tests under the ipahealthcheck.dogtag.ca source: DogtagCertsConfigCheck This test compares the CA (Certificate Authority) certificates in its NSS database to the same values stored in CS.cfg . If they do not match, the CA fails to start. Specifically, it checks: auditSigningCert cert-pki-ca against ca.audit_signing.cert ocspSigningCert cert-pki-ca against ca.ocsp_signing.cert caSigningCert cert-pki-ca against ca.signing.cert subsystemCert cert-pki-ca against ca.subsystem.cert Server-Cert cert-pki-ca against ca.sslserver.cert If Key Recovery Authority (KRA) is installed: transportCert cert-pki-kra against ca.connector.KRA.transportCert DogtagCertsConnectivityCheck This test verifies connectivity. It is equivalent to the ipa cert-show 1 command, which checks: The PKI proxy configuration in Apache IdM being able to find a CA The RA agent client certificate Correctness of CA replies to requests Note that the test checks a certificate with serial #1 because you want to verify that cert-show can be executed and returns an expected result from the CA (either the certificate or a not-found response). Note Run these tests on all IdM servers when trying to find an issue. 25.2. Screening system certificates using Healthcheck Follow this procedure to run a standalone manual test of Identity Management (IdM) certificates using the Healthcheck tool. Since the Healthcheck tool includes many tests, you can narrow the results by including only DogTag tests: --source=ipahealthcheck.dogtag.ca Procedure To run Healthcheck restricted to DogTag certificates, enter: An example of a successful test: An example of a failed test: Additional resources See man ipa-healthcheck . | [
"ipa-healthcheck --list-sources",
"ipa-healthcheck --source=ipahealthcheck.dogtag.ca",
"{ \"source: ipahealthcheck.dogtag.ca\", \"check: DogtagCertsConfigCheck\", \"result: SUCCESS\", \"uuid: 9b366200-9ec8-4bd9-bb5e-9a280c803a9c\", \"when: 20191008135826Z\", \"duration: 0.252280\", \"kw:\" { \"key\": \"Server-Cert cert-pki-ca\", \"configfile\": \"/var/lib/pki/pki-tomcat/conf/ca/CS.cfg\" } }",
"{ \"source: ipahealthcheck.dogtag.ca\", \"check: DogtagCertsConfigCheck\", \"result: CRITICAL\", \"uuid: 59d66200-1447-4b3b-be01-89810c803a98\", \"when: 20191008135912Z\", \"duration: 0.002022\", \"kw:\" { \"exception\": \"NSDB /etc/pki/pki-tomcat/alias not initialized\", } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/verifying-system-certificates-using-idm-healthcheck_managing-certificates-in-idm |
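When screening a larger environment, it can also help to limit the report to problems only. Recent versions of the tool accept a --failures-only option for this purpose (treat the option name as an assumption to verify against man ipa-healthcheck on your system): ipa-healthcheck --source=ipahealthcheck.dogtag.ca --failures-only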
Chapter 5. Configuring Satellite Server with External Services | Chapter 5. Configuring Satellite Server with External Services If you do not want to configure the DNS, DHCP, and TFTP services on Satellite Server, use this section to configure your Satellite Server to work with external DNS, DHCP and TFTP services. 5.1. Configuring Satellite Server with External DNS You can configure Satellite Server with external DNS. Satellite Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Satellite Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 5.2. Configuring Satellite Server with External DHCP To configure Satellite Server with external DHCP, you must complete the following procedures: Section 5.2.1, "Configuring an External DHCP Server to Use with Satellite Server" Section 5.2.2, "Configuring Satellite Server with an External DHCP Server" 5.2.1. Configuring an External DHCP Server to Use with Satellite Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Satellite Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) or its utility packages. You must also share the DHCP configuration and lease files with Satellite Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, clients fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and BIND packages or its utility packages depending on your host version. For Red Hat Enterprise Linux 7 host: For Red Hat Enterprise Linux 8 host: Generate a security token: As a result, a key pair that consists of two files is created in the current directory. Copy the secret hash from the key: Edit the dhcpd configuration file for all subnets and add the key. The following is an example: Note that the option routers value is the Satellite or Capsule IP address that you want to use with an external DHCP service. Delete the two key files from the directory that they were created in. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a step: To ensure that the configuration files are accessible, restore the read and execute flags: Start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. 5.2.2. Configuring Satellite Server with an External DHCP Server You can configure Satellite Server with an external DHCP server. Prerequisite Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Satellite Server. For more information, see Section 5.2.1, "Configuring an External DHCP Server to Use with Satellite Server" . Procedure Install the nfs-utils utility: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DHCP service with the appropriate subnets and domain. 5.3. Configuring Satellite Server with External TFTP You can configure Satellite Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 5.4. Configuring Satellite Server with External IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage using the IdM server. 
Satellite Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. For more information about Red Hat Identity Management, see the Linux Domain Identity, Authentication, and Policy Guide . To configure Satellite Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 5.4.1, "Configuring Dynamic DNS Update with GSS-TSIG Authentication" Section 5.4.2, "Configuring Dynamic DNS Update with TSIG Authentication" To revert to internal DNS service, use the following procedure: Section 5.4.3, "Reverting to Internal DNS Service" Note You are not required to use Satellite Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Satellite Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see External Authentication for Provisioned Hosts in the Administering Red Hat Satellite guide. 5.4.1. Configuring Dynamic DNS Update with GSS-TSIG Authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Satellite Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos Principal on the IdM Server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Satellite Server to use to authenticate on the IdM server. Installing and Configuring the IdM Client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS Zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . 
Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that Manages the DNS Service for the Domain Use the satellite-installer command to configure the Satellite or Capsule that manages the DNS Service for the domain: On Satellite, enter the following command: On Capsule, enter the following command: After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the Configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Satellite Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 5.4.2. Configuring Dynamic DNS Update with TSIG Authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling External Updates to the DNS Zone in the IdM Server On the IdM Server, add the following to the top of the /etc/named.conf file: Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. 
Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing External Updates to the DNS Zone in the IdM Server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 5.4.3. Reverting to Internal DNS Service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS Server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information,see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the Configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. | [
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"yum install dhcp bind",
"yum install dhcp-server bind-utils",
"dnssec-keygen -a HMAC-MD5 -b 512 -n HOST omapi_key",
"grep ^Key Komapi_key.+*.private | cut -d ' ' -f2",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm HMAC-MD5; secret \"jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw==\"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp && firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl start dhcpd",
"yum install nfs-utils systemctl enable rpcbind nfs-server systemctl start rpcbind nfs-server nfs-lock nfs-idmapd",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp firewall-cmd --runtime-to-permanent",
"firewall-cmd --zone public --add-service mountd && firewall-cmd --zone public --add-service rpc-bind && firewall-cmd --zone public --add-service nfs && firewall-cmd --runtime-to-permanent",
"yum install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash bash-4.2USD cat /mnt/nfs/etc/dhcp/dhcpd.conf bash-4.2USD cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases bash-4.2USD exit",
"satellite-installer --foreman-proxy-dhcp=true --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret=jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw== --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911 --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-server= DHCP_Server_FQDN",
"systemctl restart foreman-proxy",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp=true --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule/047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --scenario satellite --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab",
"satellite-installer --scenario capsule --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --scenario satellite --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-keyfile=/etc/rndc.key --foreman-proxy-dns-ttl=86400",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/configuring-external-services |
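As a practical follow-up to the external DNS, DHCP, and TFTP configuration above, the following is a minimal verification sketch to run from Satellite Server once the satellite-installer runs have completed. It is a hedged outline rather than an official procedure: dns.example.com, tftp.example.com, the throwaway record aaa.example.com, the test IP address, and the pxelinux.0 boot file are placeholder assumptions, and the TFTP check assumes curl and an already-deployed PXE boot file are available.

# DNS: push a throwaway record through the external server, resolve it, then delete it
echo -e "server dns.example.com\nupdate add aaa.example.com 3600 IN A 192.168.38.2\nsend\n" | nsupdate -k /etc/foreman-proxy/rndc.key
nslookup aaa.example.com dns.example.com
echo -e "server dns.example.com\nupdate delete aaa.example.com 3600 IN A 192.168.38.2\nsend\n" | nsupdate -k /etc/foreman-proxy/rndc.key

# DHCP: confirm the foreman-proxy user can read the NFS-mounted configuration and lease files
sudo -u foreman-proxy head -n 5 /mnt/nfs/etc/dhcp/dhcpd.conf /mnt/nfs/var/lib/dhcpd/dhcpd.leases

# TFTP: fetch a boot file through the NFS-backed TFTP root
curl -s -o /tmp/pxelinux.0 tftp://tftp.example.com/pxelinux.0 && echo "TFTP fetch OK"

If any step fails, recheck the corresponding satellite-installer options and repeat the Capsule refresh step in the Satellite web UI before associating the service with subnets and domains.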
Chapter 4. About Kafka Connect | Chapter 4. About Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems. The other system is typically an external data source or target, such as a database. Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems. Plugins provide a set of one or more artifacts that define a connector and task implementation for connecting to a given kind of data source. The configuration describes the source input data and target output data to feed into and out of Kafka Connect. The plugins might also contain the libraries and files needed to transform the data. A Kafka Connect deployment can have one or more plugins, but only one version of each plugin. Plugins for many external systems are available for use with Kafka Connect. You can also create your own plugins. Streams for Apache Kafka operates Kafka Connect in distributed mode , distributing data streaming tasks across one or more worker pods. A Kafka Connect cluster comprises a group of worker pods. Each connector is instantiated on a single worker. Each connector comprises one or more tasks that are distributed across the group of workers. Distribution across workers permits highly scalable pipelines. Workers convert data from one format into another format that's suitable for the source or target system. Depending on the configuration of the connector instance, workers might also apply transforms (also known as Single Message Transforms, or SMTs). Transforms adjust messages, such as filtering certain data, before they are converted. Kafka Connect has some built-in transforms, but other transformations can be provided by plugins if necessary. 4.1. How Kafka Connect streams data Kafka Connect uses connector instances to integrate with other systems to stream data. Kafka Connect loads existing connector instances on start up and distributes data streaming tasks and connector configuration across worker pods. Workers run the tasks for the connector instances. Each worker runs as a separate pod to make the Kafka Connect cluster more fault tolerant. If there are more tasks than workers, workers are assigned multiple tasks. If a worker fails, its tasks are automatically assigned to active workers in the Kafka Connect cluster. The main Kafka Connect components used in streaming data are as follows: Connectors to create tasks Tasks to move data Workers to run tasks Transforms to manipulate data Converters to convert data 4.1.1. Connectors Connectors can be one of the following type: Source connectors that push data into Kafka Sink connectors that extract data out of Kafka Plugins provide the implementation for Kafka Connect to run connector instances. Connector instances create the tasks required to transfer data in and out of Kafka. The Kafka Connect runtime orchestrates the tasks to split the work required between the worker pods. MirrorMaker 2 also uses the Kafka Connect framework. In this case, the external data system is another Kafka cluster. 
Specialized connectors for MirrorMaker 2 manage data replication between source and target Kafka clusters. Note In addition to the MirrorMaker 2 connectors, Kafka provides two connectors as examples: FileStreamSourceConnector streams data from a file on the worker's filesystem to Kafka, reading the input file and sending each line to a given Kafka topic. FileStreamSinkConnector streams data from Kafka to the worker's filesystem, reading messages from a Kafka topic and writing a line for each in an output file. The following source connector diagram shows the process flow for a source connector that streams records from an external data system. A Kafka Connect cluster might operate source and sink connectors at the same time. Workers are running in distributed mode in the cluster. Workers can run one or more tasks for more than one connector instance. Source connector streaming data to Kafka A plugin provides the implementation artifacts for the source connector A single worker initiates the source connector instance The source connector creates the tasks to stream data Tasks run in parallel to poll the external data system and return records Transforms adjust the records, such as filtering or relabelling them Converters put the records into a format suitable for Kafka The source connector is managed using KafkaConnectors or the Kafka Connect API The following sink connector diagram shows the process flow when streaming data from Kafka to an external data system. Sink connector streaming data from Kafka A plugin provides the implementation artifacts for the sink connector A single worker initiates the sink connector instance The sink connector creates the tasks to stream data Tasks run in parallel to poll Kafka and return records Converters put the records into a format suitable for the external data system Transforms adjust the records, such as filtering or relabelling them The sink connector is managed using KafkaConnectors or the Kafka Connect API 4.1.2. Tasks Data transfer orchestrated by the Kafka Connect runtime is split into tasks that run in parallel. A task is started using the configuration supplied by a connector instance. Kafka Connect distributes the task configurations to workers, which instantiate and execute tasks. A source connector task polls the external data system and returns a list of records that a worker sends to the Kafka brokers. A sink connector task receives Kafka records from a worker for writing to the external data system. For sink connectors, the number of tasks created relates to the number of partitions being consumed. For source connectors, how the source data is partitioned is defined by the connector. You can control the maximum number of tasks that can run in parallel by setting tasksMax in the connector configuration. The connector might create fewer tasks than the maximum setting. For example, the connector might create fewer tasks if it's not possible to split the source data into that many partitions. Note In the context of Kafka Connect, a partition can mean a topic partition or a shard of data in an external system. 4.1.3. Workers Workers employ the connector configuration deployed to the Kafka Connect cluster. The configuration is stored in an internal Kafka topic used by Kafka Connect. Workers also run connectors and their tasks. A Kafka Connect cluster contains a group of workers with the same group.id . The ID identifies the cluster within Kafka. The ID is assigned in the worker configuration through the KafkaConnect resource. 
Worker configuration also specifies the names of internal Kafka Connect topics. The topics store connector configuration, offset, and status information. The group ID and names of these topics must also be unique to the Kafka Connect cluster. Workers are assigned one or more connector instances and tasks. The distributed approach to deploying Kafka Connect is fault tolerant and scalable. If a worker pod fails, the tasks it was running are reassigned to active workers. You can add to a group of worker pods through configuration of the replicas property in the KafkaConnect resource. 4.1.4. Transforms Kafka Connect translates and transforms external data. Single-message transforms change messages into a format suitable for the target destination. For example, a transform might insert or rename a field. Transforms can also filter and route data. Plugins contain the implementation required for workers to perform one or more transformations. Source connectors apply transforms before converting data into a format supported by Kafka. Sink connectors apply transforms after converting data into a format suitable for an external data system. A transform comprises a set of Java class files packaged in a JAR file for inclusion in a connector plugin. Kafka Connect provides a set of standard transforms, but you can also create your own. 4.1.5. Converters When a worker receives data, it converts the data into an appropriate format using a converter. You specify converters for workers in the worker config in the KafkaConnect resource. Kafka Connect can convert data to and from formats supported by Kafka, such as JSON or Avro. It also supports schemas for structuring data. If you are not converting data into a structured format, you don't need to enable schemas. Note You can also specify converters for specific connectors to override the general Kafka Connect worker configuration that applies to all workers. Additional resources Apache Kafka documentation Kafka Connect configuration of workers Synchronizing data between Kafka clusters using MirrorMaker 2 | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/kafka-connect-components_str |
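To make the connector lifecycle described above concrete, the sketch below registers one of the example file connectors through the Kafka Connect REST API and then checks its status. Treat it as a minimal illustration rather than a recommended production setup: the address my-connect-cluster-connect-api:8083 assumes a Kafka Connect cluster named my-connect-cluster reachable from within the same OpenShift project, and /opt/kafka/LICENSE and my-topic are arbitrary placeholders.

# Create a FileStreamSourceConnector instance; Kafka Connect then creates the task that streams the file into the topic
curl -s -X POST -H "Content-Type: application/json" \
  http://my-connect-cluster-connect-api:8083/connectors \
  -d '{
    "name": "file-source-example",
    "config": {
      "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
      "tasks.max": "1",
      "file": "/opt/kafka/LICENSE",
      "topic": "my-topic"
    }
  }'

# Inspect the connector instance and the state of its tasks
curl -s http://my-connect-cluster-connect-api:8083/connectors/file-source-example/status

The same configuration can instead be expressed declaratively as a KafkaConnector resource, which is the approach the operator manages for you.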
4.4.3. Swapping | 4.4.3. Swapping While swapping (writing modified pages out to the system swap space) is a normal part of a system's operation, it is possible to experience too much swapping. The reason to be wary of excessive swapping is that the following situation can easily occur, over and over again: Pages from a process are swapped The process becomes runnable and attempts to access a swapped page The page is faulted back into memory (most likely forcing some other processes' pages to be swapped out) A short time later, the page is swapped out again If this sequence of events is widespread, it is known as thrashing and is indicative of insufficient RAM for the present workload. Thrashing is extremely detrimental to system performance, as the CPU and I/O loads that can be generated in such a situation quickly outweigh the load imposed by a system's real work. In extreme cases, the system may actually do no useful work, spending all its resources moving pages to and from memory. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-memory-concepts-swapping |
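Because occasional swap activity is normal, a useful habit when diagnosing the thrashing pattern described in the section above is to watch the swap-in (si) and swap-out (so) rates over successive samples rather than a single snapshot. The one-liner below is a rough sketch: it assumes the default vmstat column layout (si and so in the seventh and eighth columns) and simply flags intervals where pages are being swapped in and out at the same time, which is the classic signature of thrashing.

# Sample every 5 seconds and flag intervals with simultaneous swap-in and swap-out activity
vmstat 5 | awk 'NR > 2 && $7 > 0 && $8 > 0 {print "possible thrashing: si=" $7, "so=" $8}'

Sustained output from this check, combined with high wait-for-I/O time, usually means the workload needs more RAM or fewer resident processes.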
8.8. Selecting Network Team Configuration Methods | 8.8. Selecting Network Team Configuration Methods To configure a network team using NetworkManager 's text user interface tool, nmtui , proceed to Section 8.9, "Configure a Network Team Using the Text User Interface, nmtui" To create a network team using the command-line tool , nmcli , proceed to Section 8.10.1, "Configure Network Teaming Using nmcli" . To create a network team using the Team daemon , teamd , proceed to Section 8.10.2, "Creating a Network Team Using teamd" . To create a network team using configuration files , proceed to Section 8.10.3, "Creating a Network Team Using ifcfg Files" . To configure a network team using a graphical user interface , see Section 8.14, "Creating a Network Team Using a GUI" | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Selecting_Network_Team_Configuration_Methods |
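If the nmcli method referenced above is the one you choose, the following minimal sketch shows the general shape of the commands involved; the team name, the activebackup runner, and the port interface names em1 and em2 are placeholders to adapt, and Section 8.10.1 remains the authoritative procedure.

# Create the team interface with an active-backup runner
nmcli con add type team con-name Team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
# Attach two ports to the team
nmcli con add type team-slave con-name Team0-port1 ifname em1 master team0
nmcli con add type team-slave con-name Team0-port2 ifname em2 master team0
# Activate the ports, then the team itself
nmcli con up Team0-port1
nmcli con up Team0-port2
nmcli con up Team0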
Chapter 8. Deployments | Chapter 8. Deployments 8.1. Understanding deployments The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template. Deployment objects involve one or more replica sets , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers , which preceded replica sets. One or more pods, which represent an instance of a particular version of an application. Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 8.1.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 8.1.1.1. Replica sets A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 8.1.1.2. Replication controllers Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. 
Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. Note Use a DeploymentConfig to create a replication controller instead of creating replication controllers directly. If you require custom orchestration or do not require updates, use replica sets instead of replication controllers. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 8.1.2. Deployments Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment . Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 8.1.3. DeploymentConfig objects Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. 
However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the version to the new version. A strategy runs inside a pod commonly referred as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure. Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The OpenShift Container Platform DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the replication controller is retained to enable easy rollback if needed. Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 8.1.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use. 
Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 8.1.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you can not delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 8.1.4.2. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects that use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rapid rollouts for Deployment objects. Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes. 8.1.4.3. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies. 8.2. 
Managing deployment processes 8.2.1. Managing DeploymentConfig objects Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 8.2.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 8.2.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 8.2.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed. 8.2.1.4. Rolling back a deployment Rollbacks revert an application back to a revision and can be performed using the REST API, the CLI, or the web console. Procedure To rollback to the last successful deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy stays intact by the system and it is up to users to fix their configurations. 8.2.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time. 
Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar # ... 8.2.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application. You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 8.2.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. 
Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 8.2.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 8.2.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. Additional resources For more information about resource limits and requests, see Understanding managing application memory . 8.2.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 8.2.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. 
This procedure shows the OpenShift Container Platform web console method. Procedure Create a new project. Navigate to Workloads Secrets . Create a secret that contains credentials for accessing a private image repository. Navigate to Workloads DeploymentConfigs . Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 8.2.1.11. Assigning pods to specific nodes You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further. Procedure To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template: apiVersion: v1 kind: Pod metadata: name: my-pod # ... spec: nodeSelector: disktype: ssd # ... Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels. Note Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default, results in a pod that will never be scheduled. 8.2.1.12. Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc # ... spec: # ... securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 8.3. Using deployment strategies Deployment strategies are used to change or upgrade applications without downtime so that users barely notice a change. Because users generally access applications through a route handled by a router, deployment strategies can focus on DeploymentConfig object features or routing features. Strategies that focus on DeploymentConfig object features impact all routes that use the application. Strategies that use router features target individual routes. Most deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. 8.3.1. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. 
The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 8.3.2. Rolling strategy A rolling deployment slowly replaces instances of the version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: "20%" 4 maxUnavailable: "10%" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure. 6 pre and post are both lifecycle hooks. The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable*=0 and maxSurge*=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable*=10% and maxSurge*=0 performs an update using no extra capacity (an in-place update). maxUnavailable*=10% and maxSurge*=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable . 
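Both parameters can also be adjusted on an existing DeploymentConfig object from the command line. The following oc patch invocation is a sketch; the frontend name and the percentages shown are assumptions rather than recommended values:

oc patch dc/frontend -p '{"spec":{"strategy":{"rollingParams":{"maxSurge":"20%","maxUnavailable":"10%"}}}}'

The new values apply to the next rollout of the DeploymentConfig object.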
Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 8.3.2.1. Canary deployments All rolling deployments in OpenShift Container Platform are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 8.3.2.2. Creating a rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest Note This image does not expose any ports. If you want to expose your applications over an external LoadBalancer service or enable access to the application over the public internet, create a service by using the oc expose dc/deployment-example --port=<port> command after completing this procedure. If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its version. 8.3.2.3. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. 
Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.2.4. Starting a rolling deployment using the Developer perspective You can upgrade an application by starting a rolling deployment. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure In the Topology view, click the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 8.1. Rolling update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 8.3.3. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which is not supported being shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 8.3.3.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.3.2. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. 
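The same strategy switch can also be made outside the web console. This one-line patch is a sketch only, not part of the web console procedure that follows, and <dc_name> is a placeholder:

oc patch dc/<dc_name> -p '{"spec":{"strategy":{"type":"Recreate"}}}'

The web console steps are as follows.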
Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 8.2. Recreate update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 8.3.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, OpenShift Container Platform provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... 
strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 8.3.4.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 
2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 8.3.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 8.4. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. Alternatively, you can use an A/B versions strategy in which both versions are active at the same time. With this strategy, some users can use version A , and other users can use version B . You can use this strategy to experiment with user interface changes or other features in order to get user feedback. You can also use it to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled. 8.4.1. Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard. 
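As a rough illustration of percentage-based traffic through relative scale, assume two shards of the same application, shard-a and shard-b, whose pods are selected by a single service; the names and replica counts here are assumptions:

oc scale dc/shard-a --replicas=9
oc scale dc/shard-b --replicas=1

With both shards behind one service, roughly 90% of requests reach shard-a and 10% reach shard-b, because requests are balanced across all matching endpoints.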
Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities. 8.4.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 8.4.3. Graceful termination OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminates individual connections at the next opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 8.4.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route. Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version. 8.4.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live.
If necessary, you can roll back to the older (blue) version by switching the service back to the version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 8.4.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI. 8.4.5.1. Load balancing for A/B testing The user sets up a route with multiple services. Each service handles a version of the application. Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights . The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight . The route can have up to four services. The weight for the service can be between 0 and 256 . When the weight is 0 , the service does not participate in load balancing but continues to serve existing persistent connections. When the service weight is not 0 , each endpoint has a minimum weight of 1 . Because of this, a service with a lot of endpoints can end up with higher weight than intended. In this case, reduce the number of pods to get the expected load balance weight . Procedure To set up the A/B environment: Create the two applications and give them different names. Each creates a Deployment object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version. Create the first application. 
The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Note When using alternateBackends , also use the roundrobin load balancing strategy to ensure requests are distributed as expected to the services based on weight. roundrobin can be set for a route by using a route annotation. See the Additional resources section for more information about route annotations. Setting the oc set route-backend to 0 means the service does not participate in load balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin # ... spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 # ... 8.4.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Actions menu to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 8.4.5.1.2. Managing weights of an new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that depicts relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 8.4.5.1.3. Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] 
[options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To override the default values for the load balancing algorithm, adjust the annotation on the route by setting the algorithm to roundrobin . For a route on OpenShift Container Platform, the default load balancing algorithm is set to random or source values. To set the algorithm to roundrobin , run the command: USD oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin For Transport Layer Security (TLS) passthrough routes, the default value is source . For all other routes, the default is random . To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 8.4.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route. 
However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red). Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b 8.4.6. Additional resources Route-specific annotations . | [
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/deployments |
Chapter 4. Using AMQ Management Console | Chapter 4. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 4.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox and Chrome. For more information on supported browser versions, see AMQ 7 Supported Configurations . 4.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web bind="http://localhost:8161" path="web"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web bind="http://0.0.0.0:8161" path="web"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker_instance_dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. 
In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker_instance_dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the artemis.profile file. Additional resources To learn how to access the console, see Section 4.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 4.4.3, "Securing network access to AMQ Management Console" . 4.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 4.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. Figure 4.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. Specify the port value that is defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia .
To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: http://192.168.0.49:8161/console/jolokia 4.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 4.4.1. Securing AMQ Management Console using Red Hat Single Sign-On Prerequisites Red Hat Single Sign-On 7.4 Procedure Configure Red Hat Single Sign-On: Navigate to the realm in Red Hat Single Sign-On that you want to use for securing AMQ Management Console. Each realm in Red Hat Single Sign-On includes a client named Broker . This client is not related to AMQ. Create a new client in Red Hat Single Sign-On, for example artemis-console . Navigate to the client settings page and set: Valid Redirect URIs to the AMQ Management Console URL followed by * , for example: http://localhost:8161/console/* Web Origins to the same value as Valid Redirect URIs . Red Hat Single Sign-On allows you to enter + , indicating that allowed CORS origins includes the value for Valid Redirect URIs . Create a role for the client, for example guest . Make sure all users who require access to AMQ Management Console are assigned the above role, for example, using Red Hat Single Sign-On groups. Configure the AMQ Broker instance: Add the following to your <broker-instance-dir> /instances/broker0/etc/login.config file to configure AMQ Management Console to use Red Hat Single Sign-On: Adding this configuration sets up a JAAS principal and a requirement for a bearer token from Red Hat Single Sign-On. The connection to Red Hat Single Sign-On is defined in the keycloak-bearer-token.json file, as described in the next step. Create a file <broker-instance-dir> /etc/keycloak-bearer-token.json with the following contents to specify the connection to Red Hat Single Sign-On used for the bearer token exchange: { "realm": " <realm-name> ", "resource": " <client-name> ", "auth-server-url": " <RHSSO-URL> /auth", "principal-attribute": "preferred_username", "use-resource-role-mappings": true, "ssl-required": "external", "confidential-port": 0 } <realm-name> the name of the realm in Red Hat Single Sign-On <client-name> the name of the client in Red Hat Single Sign-On <RHSSO-URL> the URL of Red Hat Single Sign-On Create a file <broker-instance-dir> /etc/keycloak-js-token.json with the following contents to specify the Red Hat Single Sign-On authentication endpoint: { "realm": "<realm-name>", "clientId": "<client-name>", "url": " <RHSSO-URL> /auth" } Configure the security settings by editing the <broker-instance-dir> /etc/broker.xml file.
For example, to allow users with the amq role consume messages and allow users with the guest role send messages, add the following: <security-setting match="Info"> <permission roles="amq" type="createDurableQueue"/> <permission roles="amq" type="deleteDurableQueue"/> <permission roles="amq" type="createNonDurableQueue"/> <permission roles="amq" type="deleteNonDurableQueue"/> <permission roles="guest" type="send"/> <permission roles="amq" type="consume"/> </security-setting> Run the AMQ Broker instance and validate AMQ Management Console configuration. 4.4.2. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 4.4.3. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker_instance_dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web bind="https://0.0.0.0:8161" path="web" keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> ... </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath=" <broker_instance_dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 4.5. Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 4.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default. 
In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 4.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 4.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 4.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 4.4. Broker diagram tab, including attributes 4.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 4.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 4.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 4.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. 
Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 4.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 4.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 4.7. Send Message page If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 4.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 4.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 4.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. Durable If you select this option, the queue and its messages will be persistent. Filter An optional filter expression (selector). If you specify a filter, only messages that match the expression are routed to the queue. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue .
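Addresses and queues that you create in the console can also be defined declaratively in the broker's <broker_instance_dir> /etc/broker.xml file, which is useful when they must exist as soon as the broker starts. The following sketch uses illustrative names and mirrors the console fields described above; adapt it to your own addresses and queues:
<addresses>
   <!-- Anycast: messages sent to "orders" go to a single consumer in a point-to-point manner -->
   <address name="orders">
      <anycast>
         <queue name="orders" max-consumers="10" purge-on-no-consumers="false"/>
      </anycast>
   </address>
   <!-- Multicast: every queue bound to "notifications" receives a copy of each message -->
   <address name="notifications">
      <multicast>
         <queue name="notifications.audit"/>
         <queue name="notifications.email">
            <!-- Optional filter: only messages that match the expression are routed to this queue -->
            <filter string="color = 'red'"/>
         </queue>
      </multicast>
   </address>
</addresses>
Queues defined this way behave the same as queues created from the Create queue tab.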
4.5.4.4. Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. The console displays a chart that shows real-time data for all of the queue attributes. Figure 4.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 4.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 4.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 4.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab. A page appears for you to compose the message. Figure 4.11. Send Message page for a queue If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 4.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box next to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 4.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box next to each message that you want to move. In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move .
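The message operations in the previous sections can also be performed with the broker's command-line interface instead of the console. The following sketch assumes a local broker with the default tcp://localhost:61616 acceptor, admin / admin credentials, and a queue named orders ; all of these values are illustrative, and the available options can differ between AMQ Broker versions, so check ./artemis help producer for your installation. Run the commands from <broker_instance_dir> /bin :
# Send 10 test messages to the orders queue
./artemis producer --url tcp://localhost:61616 --user admin --password admin --destination queue://orders --message-count 10
# Show message, consumer, and delivery counts for the queues on the broker
./artemis queue stat --url tcp://localhost:61616 --user admin --password admin
# Consume (and therefore remove) 10 messages from the queue
./artemis consumer --url tcp://localhost:61616 --user admin --password admin --destination queue://orders --message-count 10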
4.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge . Do one of the following: To... Do this... Delete a message from the queue Click the check box next to each message that you want to delete. Click the Delete button. Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button. | [
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </web>",
"<web bind=\"http://0.0.0.0:8161\" path=\"web\">",
"<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>",
"-Dhawtio.disableProxy=false",
"-Dhawtio.proxyWhitelist=192.168.0.51",
"http://192.168.0.49/console/jolokia",
"https://broker.example.com:8161/console/*",
"console { org.keycloak.adapters.jaas.BearerTokenLoginModule required keycloak-config-file=\"USD{artemis.instance}/etc/keycloak-bearer-token.json\" role-principal-class=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal ; };",
"{ \"realm\": \" <realm-name> \", \"resource\": \" <client-name> \", \"auth-server-url\": \" <RHSSO-URL> /auth\", \"principal-attribute\": \"preferred_username\", \"use-resource-role-mappings\": true, \"ssl-required\": \"external\", \"confidential-port\": 0 }",
"{ \"realm\": \"<realm-name>\", \"clientId\": \"<client-name>\", \"url\": \" <RHSSO-URL> /auth\" }",
"<security-setting match=\"Info\"> <permission roles=\"amq\" type=\"createDurableQueue\"/> <permission roles=\"amq\" type=\"deleteDurableQueue\"/> <permission roles=\"amq\" type=\"createNonDurableQueue\"/> <permission roles=\"amq\" type=\"deleteNonDurableQueue\"/> <permission roles=\"guest\" type=\"send\"/> <permission roles=\"amq\" type=\"consume\"/> </security-setting>",
"<web bind=\"https://0.0.0.0:8161\" path=\"web\" keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </web>",
"keyStorePath=\" <broker_instance_dir> /etc/keystore.jks\""
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/managing_amq_broker/assembly-using-AMQ-console-managing |
Chapter 3. Configuring IAM for IBM Cloud VPC | Chapter 3. Configuring IAM for IBM Cloud VPC In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud; therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: $ ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud VPC 3.3. Next steps Installing a cluster on IBM Cloud VPC with customizations
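With ccoctl in place, the usual flow for IBM Cloud is to extract the CredentialsRequest objects from the release image and then let the ibmcloud subcommand create the service IDs and the corresponding secret manifests for the installation program. The following sketch is illustrative only: the API key, cluster name, and directory names are placeholders, and the exact flags can vary between OpenShift Container Platform versions, so confirm them against ccoctl ibmcloud --help and the installation document referenced in Next steps:
# ccoctl authenticates to IBM Cloud with an API key supplied in this environment variable
export IC_API_KEY=<api_key>
# Extract the IBM Cloud CredentialsRequest objects from the release image obtained earlier
oc adm release extract --credentials-requests --cloud=ibmcloud --to=credreqs $RELEASE_IMAGE
# Create a service ID for each CredentialsRequest and write the secret manifests to the output directory
ccoctl ibmcloud create-service-id --credentials-requests-dir=credreqs --name=<cluster_name> --output-dir=ccoctl-output
You then copy the generated manifests into the installation program's manifests directory before creating the cluster, as described in the installation document.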
3.4. Additional resources Preparing to update a cluster with manually maintained credentials | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"ccoctl --help",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_cloud_vpc/configuring-iam-ibm-cloud |