title | content | commands | url |
---|---|---|---|
Chapter 18. Red Hat Quay garbage collection | Chapter 18. Red Hat Quay garbage collection Red Hat Quay includes automatic and continuous image garbage collection. Garbage collection ensures efficient use of resources for active objects by removing objects that occupy sizeable amounts of disk space, such as dangling or untagged images, repositories, and blobs, including layers and manifests. Garbage collection performed by Red Hat Quay can reduce downtime in your organization's environment. 18.1. Red Hat Quay garbage collection in practice Currently, all garbage collection happens discreetly, and there are no commands to manually run garbage collection. Red Hat Quay provides metrics that track the status of the different garbage collection workers. For namespace and repository garbage collection, the progress is tracked based on the size of their respective queues. Namespace and repository garbage collection workers require a global lock to work. As a result, and for performance reasons, only one worker runs at a time. Note Red Hat Quay shares blobs between namespaces and repositories in order to conserve disk space. For example, if the same image is pushed 10 times, only one copy of that image will be stored. It is possible that tags can share their layers with different images already stored somewhere in Red Hat Quay. In that case, blobs will stay in storage, because deleting shared blobs would make other images unusable. Blob expiration is independent of the time machine. If you push a tag to Red Hat Quay and the time machine is set to 0 seconds, and then you delete a tag immediately, garbage collection deletes the tag and everything related to that tag, but will not delete the blob storage until the blob expiration time is reached. Garbage collecting tagged images works differently than garbage collection on namespaces or repositories. Rather than having a queue of items to work with, the garbage collection workers for tagged images actively search for a repository with inactive or expired tags to clean up. Each instance of garbage collection workers will grab a repository lock, which results in one worker per repository. Note In Red Hat Quay, inactive or expired tags are manifests without tags because the last tag was deleted or it expired. The manifest stores information about how the image is composed and stored in the database for each individual tag. When a tag is deleted and the allotted time from Time Machine has been met, Red Hat Quay garbage collects the blobs that are not connected to any other manifests in the registry. If a particular blob is connected to a manifest, then it is preserved in storage and only its connection to the manifest that is being deleted is removed. Expired images will disappear after the allotted time, but are still stored in Red Hat Quay. The time in which an image is completely deleted, or collected, depends on the Time Machine setting of your organization. The default time for garbage collection is 14 days unless otherwise specified. Until that time, tags can still point to expired or deleted images. For each type of garbage collection, Red Hat Quay provides metrics for the number of rows per table deleted by each garbage collection worker. The following image shows an example of how Red Hat Quay monitors garbage collection with the same metrics: 18.1.1. Measuring storage reclamation Red Hat Quay does not have a way to track how much space is freed up by garbage collection. 
Currently, the best indicator of this is by checking how many blobs have been deleted in the provided metrics. Note The UploadedBlob table in the Red Hat Quay metrics tracks the various blobs that are associated with a repository. When a blob is uploaded, it will not be garbage collected before the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter. This is to avoid prematurely deleting blobs that are part of an ongoing push. For example, if garbage collection is set to run often, and a tag is deleted in the span of less than one hour, then it is possible that the associated blobs will not get cleaned up immediately. Instead, and assuming that the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter has passed, the associated blobs will be removed the next time garbage collection runs on that same repository. 18.2. Garbage collection configuration fields The following configuration fields are available to customize what is garbage collected, and the frequency at which garbage collection occurs: Name Description Schema FEATURE_GARBAGE_COLLECTION Whether garbage collection is enabled for image tags. Defaults to true . Boolean FEATURE_NAMESPACE_GARBAGE_COLLECTION Whether garbage collection is enabled for namespaces. Defaults to true . Boolean FEATURE_REPOSITORY_GARBAGE_COLLECTION Whether garbage collection is enabled for repositories. Defaults to true . Boolean GARBAGE_COLLECTION_FREQUENCY The frequency, in seconds, at which the garbage collection worker runs. Affects only garbage collection workers. Defaults to 30 seconds. String PUSH_TEMP_TAG_EXPIRATION_SEC The number of seconds that blobs will not be garbage collected after being uploaded. This feature prevents garbage collection from cleaning up blobs that are not referenced yet, but still used as part of an ongoing push. String TAG_EXPIRATION_OPTIONS List of valid tag expiration values. String DEFAULT_TAG_EXPIRATION Tag expiration time for time machine. String CLEAN_BLOB_UPLOAD_FOLDER Automatically cleans stale blobs left over from an S3 multipart upload. By default, blob files older than two days are cleaned up every hour. Boolean. Default: true 18.3. Disabling garbage collection The garbage collection features for image tags, namespaces, and repositories are stored in the config.yaml file. These features default to true . In rare cases, you might want to disable garbage collection, for example, to control when garbage collection is performed. You can disable garbage collection by setting the GARBAGE_COLLECTION features to false . When disabled, dangling or untagged images, repositories, namespaces, layers, and manifests are not removed. This might increase the downtime of your environment. Note There is no command to manually run garbage collection. Instead, you would disable, and then re-enable, the garbage collection feature. 18.4. Garbage collection and quota management Red Hat Quay introduced quota management in 3.7. With quota management, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. As of Red Hat Quay 3.7, garbage collection reclaims memory that was allocated to images, repositories, and blobs after deletion. Because the garbage collection feature reclaims memory after deletion, there is a discrepancy between what is stored in an environment's disk space and what quota management is reporting as the total consumption. There is currently no workaround for this issue. 18.5. 
Garbage collection in practice Use the following procedure to check your Red Hat Quay logs to ensure that garbage collection is working. Procedure Enter the following command to ensure that garbage collection is properly working: $ sudo podman logs <container_id> Example output: gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-11-14 18:47:22 UTC)" executed successfully Delete an image tag. Enter the following command to ensure that the tag was deleted: $ podman logs quay-app Example output: gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] "DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0" 204 0 "http://quay-server.example.com/repository/quayadmin/busybox?tab=tags" "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" 18.6. Red Hat Quay garbage collection metrics The following metrics show how many resources have been removed by garbage collection. These metrics show how many times the garbage collection workers have run and how many namespaces, repositories, and blobs were removed. Metric name Description quay_gc_iterations_total Number of iterations by the GCWorker quay_gc_namespaces_purged_total Number of namespaces purged by the NamespaceGCWorker quay_gc_repos_purged_total Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker quay_gc_storage_blobs_deleted_total Number of storage blobs deleted Sample metrics output # TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09 ... # HELP quay_gc_iterations_total number of iterations by the GCWorker # TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09 ... # HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker # TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09 ... # HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker # TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09 ... 
# HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted # TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... | [
"sudo podman logs <container_id>",
"gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job \"GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-11-14 18:47:22 UTC)\" executed successfully",
"podman logs quay-app",
"gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] \"DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0\" 204 0 \"http://quay-server.example.com/repository/quayadmin/busybox?tab=tags\" \"Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\"",
"TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/garbage-collection |
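As a rough illustration of the garbage collection fields documented in the entry above, the following is a minimal config.yaml sketch. The field names come from the table above; the values are assumptions chosen for illustration (the 2w value is intended to match the 14-day time machine default mentioned in the entry, and the TAG_EXPIRATION_OPTIONS list is an example, not a confirmed default).

```yaml
# Illustrative garbage collection settings for a Red Hat Quay config.yaml.
# Field names are from the documentation above; values are assumptions.
FEATURE_GARBAGE_COLLECTION: true              # GC for image tags
FEATURE_NAMESPACE_GARBAGE_COLLECTION: true    # GC for namespaces
FEATURE_REPOSITORY_GARBAGE_COLLECTION: true   # GC for repositories
GARBAGE_COLLECTION_FREQUENCY: "30"            # worker interval in seconds
PUSH_TEMP_TAG_EXPIRATION_SEC: "3600"          # protect blobs from an ongoing push
TAG_EXPIRATION_OPTIONS: ["0s", "1d", "1w", "2w", "4w"]   # example values only
DEFAULT_TAG_EXPIRATION: "2w"                  # time machine window (14 days)
CLEAN_BLOB_UPLOAD_FOLDER: true                # clean stale S3 multipart uploads
```

Setting the three FEATURE_*_GARBAGE_COLLECTION flags to false corresponds to the "Disabling garbage collection" behavior described in the entry.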
7.18. brltty | 7.18. brltty 7.18.1. RHBA-2012:1231 - brltty bug fix update Updated brltty packages that fix two bugs are now available for Red Hat Enterprise Linux 6. BRLTTY is a background process (daemon) which provides access to the Linux console (when in text mode) for a blind person using a refreshable braille display. It drives the braille display, and provides complete screen review functionality. Bug Fixes BZ# 684526 Previously, building the brltty package could fail on the ocaml's unpackaged files error. This happened only if the ocaml package was pre-installed in the build root. The "--disable-caml-bindings" option has been added in the %configure macro so that the package now builds correctly. BZ#809326 Previously, the /usr/lib/libbrlapi.so symbolic link installed by the brlapi-devel package incorrectly pointed to ../../lib/libbrlapi.so. The link has been fixed to correctly point to ../../lib/libbrlapi.so.0.5. All users of brltty are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/brltty |
16.6. Synchronizing Groups | 16.6. Synchronizing Groups Like user entries, groups are not automatically synchronized between Directory Server and Active Directory. Synchronization in both directions has to be configured: Groups in the Active Directory domain are synchronized if it is configured in the sync agreement by selecting the Sync New Windows Groups option. All of the Windows groups are copied to the Directory Server when synchronization is initiated and then new groups are synchronized over as they are created. A Directory Server group account is synchronized to Active Directory through specific attributes that are present on the Directory Server entry. Any Directory Server entry must have the ntGroup object class and the ntGroupCreateNewGroup attribute; the ntGroupCreateNewGroup attribute (even on an existing entry) signals Directory Server Windows Synchronization to write the entry over to the Active Directory server. New or modified groups that have the ntGroup object class are created and synchronized over to the Windows machine at the regular update. Important When a group is synchronized, the list of all of its members is also synchronized. However, the member entries themselves are not synchronized unless user sync is enabled and applies to those entries. This could create a problem when an application or service tries to do a modify operation on all members in a group on the Active Directory server, if some of those users do not exist. Additionally, groups have a few other common attributes: Two attributes control whether Directory Server groups are created and deleted on Active Directory, ntGroupCreateNewGroup and ntGroupDeleteGroup . ntGroupCreateNewGroup is required to sync Directory Server groups over to Active Directory. ntUserDomainId contains the unique ID for the entry on the Active Directory domain. This is the only required attribute for the ntGroup object class. ntGroupType is the type of Windows group. Windows group types are global/security, domain local/security, builtin, universal/security, global/distribution, domain local/distribution, or universal/distribution. This is set automatically for Windows groups that are synchronized over, but this attribute must be set manually on Directory Server entries before they can be synchronized. 16.6.1. About Windows Group Types In Active Directory, there are two major types of groups: security and distribution. Security groups are most similar to groups in Directory Server, since security groups can have policies configured for access controls, resource restrictions, and other permissions. Distribution groups are for mailing distribution. These are further broken down into global and local groups. The Directory Server ntGroupType supports all four group types: -2147483646 for global/security (the default) -2147483644 for domain local/security -2147483643 for builtin -2147483640 for universal/security 2 for global/distribution 4 for domain local/distribution 8 for universal/distribution 16.6.2. Group Attributes Synchronized between Directory Server and Active Directory Only a subset of Directory Server and Active Directory attributes are synchronized. These attributes are hard-coded and are defined regardless of which way the entry is being synchronized. Any other attributes present in the entry, either in Directory Server or in Active Directory, remain unaffected by synchronization. Some attributes used in Directory Server and Active Directory group entries are identical. 
These are usually attributes defined in an LDAP standard, which are common among all LDAP services. These attributes are synchronized to one another exactly. Table 16.4, "Group Entry Attributes That Are the Same between Directory Server and Active Directory" shows attributes that are the same between the Directory Server and Windows servers. Some attributes define the same information, but the names of the attributes or their schema definitions are different. These attributes are mapped between Active Directory and Directory Server, so that attribute A in one server is treated as attribute B in the other. For synchronization, many of these attributes relate to Windows-specific information. Table 16.3, "Group Entry Attribute Mapping between Directory Server and Active Directory" shows the attributes that are mapped between the Directory Server and Windows servers. For more information on the differences in ways that Directory Server and Active Directory handle some schema elements, see Section 16.6.3, "Group Schema Differences between Red Hat Directory Server and Active Directory" . Table 16.3. Group Entry Attribute Mapping between Directory Server and Active Directory Directory Server Active Directory cn name ntUserDomainID name ntGroupType groupType uniqueMember member Member [a] [a] The Member attribute in Active Directory is synchronized to the uniqueMember attribute in Directory Server. Table 16.4. Group Entry Attributes That Are the Same between Directory Server and Active Directory cn o description ou l seeAlso mail 16.6.3. Group Schema Differences between Red Hat Directory Server and Active Directory Although Active Directory supports the same basic X.500 object classes as Directory Server, there are a few incompatibilities of which administrators should be aware. Nested groups (where a group contains another group as a member) are supported and for Windows Synchronization are synchronized. However, Active Directory imposes certain constraints as to the composition of nested groups. For example, a global group is not allowed to contain a domain local group as a member. Directory Server has no concept of local and global groups, and, therefore, it is possible to create entries on the Directory Server side that violate Active Directory's constraints when synchronized. 16.6.4. Configuring Group Synchronization for Directory Server Groups For Directory Server groups to be synchronized over to Active Directory, the group entries must have the appropriate sync attributes set. To enable synchronization through the command line, add the required sync attributes to an entry or create an entry with those attributes. Three schema elements are required for synchronization: The ntGroup object class. The ntUserDomainId attribute, to give the Windows ID for the entry. The ntGroupCreateNewGroup attribute, to signal to the synchronization plug-in to sync the Directory Server entry over to Active Directory. The ntGroupDeleteGroup attribute is optional, but this sets whether to delete the entry automatically from the Active Directory domain if it is deleted in the Directory Server. It is also recommended to add the ntGroupType attribute. If this attribute is not specified, then the group is automatically added as a global security group ( ntGroupType:-2147483646 ). For example, using ldapmodify : Many additional Windows and group attributes can be added to the entry. 
All of the schema which is synchronized is listed in Section 16.6.2, "Group Attributes Synchronized between Directory Server and Active Directory" . Windows-specific attributes, belonging to the ntGroup object class, are described in more detail in the Red Hat Directory Server 11 Configuration, Command, and File Reference . 16.6.5. Configuring Group Synchronization for Active Directory Groups Synchronization for Windows users (users which originate in the Active Directory domain) is configured in the sync agreement. To enable group synchronization, set the --sync-groups option to on for the sync agreement, as in the dsconf repl-winsync-agmt example shown below. To disable group synchronization, set the --sync-groups option to off. | [
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=Example Group,ou=Groups,dc=example,dc=com changetype: modify add: objectClass objectClass:ntGroup - add: ntUserDomainId ntUserDomainId: example-group - add: ntGroupCreateNewGroup ntGroupCreateNewGroup: true - add: ntGroupDeleteGroup ntGroupDeleteGroup: true - add: ntGroupType ntGroupType: 2",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt set --sync-groups=\"on\" --suffix=\" dc=example,dc=com \" example-agreement"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using_windows_sync-synchronizing_groups |
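As a follow-up to the ldapmodify example in the entry above, the following is a minimal verification sketch. The bind parameters and the group DN are reused from that example, and the attribute list simply mirrors the sync-related attributes discussed in the entry; none of this is prescribed by the documentation itself.

```bash
# Read back the Windows sync attributes on the group entry modified above.
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -x \
    -b "cn=Example Group,ou=Groups,dc=example,dc=com" -s base \
    "(objectClass=*)" ntUserDomainId ntGroupCreateNewGroup ntGroupDeleteGroup ntGroupType
```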
Chapter 6. Troubleshooting the Bare Metal Provisioning service | Chapter 6. Troubleshooting the Bare Metal Provisioning service Use the following procedures to diagnose issues in a Red Hat OpenStack Services on OpenShift (RHOSO) environment that includes the Bare Metal Provisioning service (ironic). 6.1. Querying node event history records You can query the node event history records to identify issues with bare-metal nodes when an operation fails. Procedure Open a remote shell connection to the OpenStackClient pod: View the event history for a particular node: This command returns a list of the error events and node state transitions that occurred on the node. Each event is identified with an event UUID. View the details of a particular event that occurred on the node: Exit the openstackclient pod: | [
"oc rsh -n openstack openstackclient",
"openstack baremetal node history list <node_id>",
"openstack baremetal node history get <node_id> <event_uuid>",
"exit"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/assembly_troubleshooting-the-bare-metal-provisioning-service |
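A short worked version of the procedure above, typed interactively (the openstack commands run inside the shell that oc rsh opens). The node name compute-0 is a made-up example, and the event UUID placeholder is left as-is because it comes from the history list output.

```bash
# Open a remote shell in the OpenStackClient pod, inspect the node's event
# history, drill into one event, then exit. "compute-0" is a hypothetical node.
oc rsh -n openstack openstackclient
openstack baremetal node history list compute-0
openstack baremetal node history get compute-0 <event_uuid>
exit
```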
Chapter 30. Automation controller tips and tricks | Chapter 30. Automation controller tips and tricks Use the automation controller CLI Tool Change the automation controller Admin Password Create an automation controller Admin from the commandline Set up a jump host to use with automation controller View Ansible outputs for JSON commands when using automation controller Locate and configure the Ansible configuration file View a listing of all ansible_ variables The ALLOW_JINJA_IN_EXTRA_VARS variable Configure the controllerhost hostname for notifications Launch Jobs with curl Filter instances returned by the dynamic inventory sources in automation controller Use an unreleased module from Ansible source with automation controller Use callback plugins with automation controller Connect to Windows with winrm Import existing inventory files and host/group vars into automation controller 30.1. The automation controller CLI Tool Automation controller has a full-featured command line interface. For more information on configuration and use, see the AWX Command Line Interface and the AWX manage utility section. 30.2. Change the automation controller Administrator Password During the installation process, you are prompted to enter an administrator password that is used for the admin superuser or system administrator created by automation controller. If you log in to the instance by using SSH, it tells you the default administrator password in the prompt. If you need to change this password at any point, run the following command as root on the automation controller server: awx-manage changepassword admin , then enter a new password. After that, the password you have entered works as the administrator password in the web UI. To set policies at creation time for password validation using Django, see Django password policies . 30.3. Create an automation controller Administrator from the command line Occasionally you might find it helpful to create a system administrator (superuser) account from the command line. To create a superuser, run the following command as root on the automation controller server and enter the administrator information as prompted: awx-manage createsuperuser 30.4. Set up a jump host to use with automation controller Credentials supplied by automation controller do not flow to the jump host through ProxyCommand. They are only used for the end-node when the tunneled connection is set up. You can configure a fixed user/keyfile in the AWX user's SSH configuration in the ProxyCommand definition that sets up the connection through the jump host. For example: Host tampa Hostname 10.100.100.11 IdentityFile [privatekeyfile] Host 10.100.. Proxycommand ssh -W [jumphostuser]@%h:%p tampa You can also add a jump host to your automation controller instance through Inventory variables. These variables can be set at either the inventory, group, or host level. To add this, navigate to your inventory and in the variables field of whichever level you choose, add the following variables: ansible_user: <user_name> ansible_connection: ssh ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q <user_name>@<jump_server_name>"' 30.5. View Ansible outputs for JSON commands when using automation controller When working with automation controller, you can use the API to obtain the Ansible outputs for commands in JSON format. To view the Ansible outputs, browse to https://<controller server name>/api/v2/jobs/<job_id>/job_events/ 30.6. 
Locate and configure the Ansible configuration file While Ansible does not require a configuration file, OS packages often include a default one in /etc/ansible/ansible.cfg for possible customization. To use a custom ansible.cfg file, place it at the root of your project. Automation controller runs ansible-playbook from the root of the project directory, where it finds the custom ansible.cfg file. Note An ansible.cfg file anywhere else in the project is ignored. To learn which values you can use in this file, see Generating a sample ansible.cfg file . Using the defaults is acceptable for starting out, but you can configure the default module path or connection type here, as well as other things. Automation controller overrides some ansible.cfg options. For example, automation controller stores the SSH ControlMaster sockets, the SSH agent socket, and any other per-job run items in a per-job temporary directory that is passed to the container used for job execution. 30.7. View a listing of all ansible_ variables By default, Ansible gathers "facts" about the machines under its management, accessible in Playbooks and in templates. To view all facts available about a machine, run the setup module as an ad hoc action: ansible -m setup hostname This prints out a dictionary of all facts available for that particular host. For more information, see information-discovered-from-systems-facts . 30.8. The ALLOW_JINJA_IN_EXTRA_VARS variable Setting ALLOW_JINJA_IN_EXTRA_VARS = template only works for saved job template extra variables. Prompted variables and survey variables are excluded from the 'template'. This parameter has three values: template to allow usage of Jinja saved directly on a job template definition (the default). never to disable all Jinja usage (recommended). always to always allow Jinja (strongly discouraged, but an option for prior compatibility). This parameter is configurable in the Jobs Settings page of the automation controller UI. 30.9. Configuring the controllerhost hostname for notifications In System settings , you can replace https://controller.example.com in the Base URL of The Controller Host field with your preferred hostname to change the notification hostname. Refreshing your automation controller license also changes the notification hostname. New installations of automation controller need not set the hostname for notifications. 30.10. Launching Jobs with curl Launching jobs with the automation controller API is simple. The following are some easy to follow examples using the curl tool. Assuming that your Job Template ID is '1', your controller IP is 192.168.42.100, and that admin and awxsecret are valid login credentials, you can create a new job this way: curl -f -k -H 'Content-Type: application/json' -XPOST \ --user admin:awxsecret \ http://192.168.42.100/api/v2/job_templates/1/launch/ This returns a JSON object that you can parse and use to extract the 'id' field, which is the ID of the newly created job. You can also pass extra variables to the Job Template call, as in the following example: curl -f -k -H 'Content-Type: application/json' -XPOST \ -d '{"extra_vars": "{\"foo\": \"bar\"}"}' \ --user admin:awxsecret http://192.168.42.100/api/v2/job_templates/1/launch/ Note The extra_vars parameter must be a string which contains JSON, not just a JSON dictionary. Use caution when escaping the quotes, etc. 30.11. 
Filtering instances returned by the dynamic inventory sources in the controller By default, the dynamic inventory sources in automation controller (such as AWS and Google) return all instances available to the cloud credentials being used. They are automatically joined into groups based on various attributes. For example, AWS instances are grouped by region, by tag name, value, and security groups. To target specific instances in your environment, write your playbooks so that they target the generated group names. For example: --- - hosts: tag_Name_webserver tasks: ... You can also use the Limit field in the Job Template settings to limit a playbook run to a certain group, groups, hosts, or a combination of them. The syntax is the same as the --limit parameter on the ansible-playbook command line. You can also create your own groups by copying the auto-generated groups into your custom groups. Make sure that the Overwrite option is disabled on your dynamic inventory source, otherwise subsequent synchronization operations delete and replace your custom groups. 30.12. Use an unreleased module from Ansible source with automation controller If there is a feature that is available in the latest Ansible core branch that you want to use with your automation controller system, making use of it in automation controller is simple. First, determine which updated module you want to use from the available Ansible Core Modules or Ansible Extra Modules GitHub repositories. Next, create a new directory, at the same directory level of your Ansible source playbooks, named /library . When this is created, copy the module you want to use and drop it into the /library directory. It is consumed first by your system modules and can be removed once you have updated the stable version with your normal package manager. 30.13. Use callback plugins with automation controller Ansible has a flexible method of handling actions during playbook runs, called callback plugins. You can use these plugins with automation controller to do things such as notify services upon playbook runs or failures, or send emails after every playbook run. For official documentation on the callback plugin architecture, see Developing plugins . Note Automation controller does not support the stdout callback plugin because Ansible only permits one, and it is already being used for streaming event data. You might also want to review some example plugins, which should be modified for site-specific purposes, such as those available at: https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback To use these plugins, put the callback plugin .py file into a directory called /callback_plugins alongside your playbook in your automation controller Project. Then, specify their paths (one path per line) in the Ansible Callback Plugins field of the Job settings: Note To have most callbacks shipped with Ansible applied globally, you must add them to the callback_whitelist section of your ansible.cfg . If you have custom callbacks, see Enabling callback plugins . 30.14. Connect to Windows with winrm By default, automation controller attempts to ssh to hosts. You must add the winrm connection information to the group variables to which the Windows hosts belong. To get started, edit the Windows group in which the hosts reside and place the variables in the source or edit screen for the group. 
To add winrm connection info: Edit the properties for the selected group by clicking on the Edit icon of the group name that contains the Windows servers. In the "variables" section, add your connection information as follows: ansible_connection: winrm When complete, save your edits. If Ansible was previously attempting an SSH connection and failed, you should re-run the job template. 30.15. Import existing inventory files and host/group vars into automation controller To import an existing static inventory and the accompanying host and group variables into automation controller, your inventory must be in a structure similar to the following: inventory/ |-- group_vars | `-- mygroup |-- host_vars | `-- myhost `-- hosts To import these hosts and vars, run the awx-manage command: awx-manage inventory_import --source=inventory/ \ --inventory-name="My Controller Inventory" If you only have a single flat file of inventory, a file called ansible-hosts, for example, import it as follows: awx-manage inventory_import --source=./ansible-hosts \ --inventory-name="My Controller Inventory" In case of conflicts or to overwrite an inventory named "My Controller Inventory", run: awx-manage inventory_import --source=inventory/ \ --inventory-name="My Controller Inventory" \ --overwrite --overwrite-vars If you receive an error, such as: ValueError: need more than 1 value to unpack Create a directory to hold the hosts file, as well as the group_vars: mkdir -p inventory-directory/group_vars Then, for each of the groups that have :vars listed, create a file called inventory-directory/group_vars/<groupname> and format the variables in YAML format. The importer then handles the conversion correctly. | [
"awx-manage changepassword admin",
"awx-manage createsuperuser",
"Host tampa Hostname 10.100.100.11 IdentityFile [privatekeyfile] Host 10.100.. Proxycommand ssh -W [jumphostuser]@%h:%p tampa",
"ansible_user: <user_name> ansible_connection: ssh ansible_ssh_common_args: '-o ProxyCommand=\"ssh -W %h:%p -q <user_name>@<jump_server_name>\"'",
"ansible -m setup hostname",
"curl -f -k -H 'Content-Type: application/json' -XPOST --user admin:awxsecret ht p://192.168.42.100/api/v2/job_templates/1/launch/",
"curl -f -k -H 'Content-Type: application/json' -XPOST -d '{\"extra_vars\": \"{\\\"foo\\\": \\\"bar\\\"}\"}' --user admin:awxsecret http://192.168.42.100/api/v2/job_templates/1/launch/",
"--- - hosts: tag_Name_webserver tasks:",
"inventory/ |-- group_vars | `-- mygroup |-- host_vars | `-- myhost `-- hosts",
"awx-manage inventory_import --source=inventory/ --inventory-name=\"My Controller Inventory\"",
"awx-manage inventory_import --source=./ansible-hosts --inventory-name=\"My Controller Inventory\"",
"awx-manage inventory_import --source=inventory/ --inventory-name=\"My Controller Inventory\" --overwrite --overwrite-vars",
"ValueError: need more than 1 value to unpack",
"mkdir -p inventory-directory/group_vars"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-tips-and-tricks |
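Building on the curl launch example in section 30.10 of the entry above, the following is a sketch of checking the job that was created. It assumes the jq tool is available to pull the id field out of the launch response, and it reuses the example credentials and controller IP from that section; the follow-up endpoint simply combines the job ID with the /api/v2/jobs/<job_id>/ path family mentioned in section 30.5.

```bash
# Launch the job template and capture the new job ID from the JSON response
# (assumes 'jq' is installed).
JOB_ID=$(curl -f -k -H 'Content-Type: application/json' -XPOST \
    --user admin:awxsecret \
    http://192.168.42.100/api/v2/job_templates/1/launch/ | jq -r '.id')

# Inspect the job record; per section 30.5, its events are available under
# /api/v2/jobs/<job_id>/job_events/.
curl -f -k --user admin:awxsecret "http://192.168.42.100/api/v2/jobs/${JOB_ID}/"
```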
High Availability for Compute Instances | High Availability for Compute Instances Red Hat OpenStack Platform 16.0 Configure High Availability for Compute Instances OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_for_compute_instances/index |
A.4. Investigating Smart Card Authentication Failures | A.4. Investigating Smart Card Authentication Failures Open the /etc/sssd/sssd.conf file, and set the debug_level option to 2 . Review the sssd_pam.log and sssd_EXAMPLE.COM.log files. If you see a timeout error message in the files, see Section B.4.4, "Smart Card Authentication Fails with Timeout Error Messages" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-gen-sc |
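A minimal sketch of the debug_level change described in the entry above. The section names are assumptions inferred from the log files mentioned there (sssd_pam.log for the [pam] responder, sssd_EXAMPLE.COM.log for the matching domain section); adjust them to your own sssd.conf and restart SSSD afterwards.

```ini
# /etc/sssd/sssd.conf -- raise debugging for the PAM responder and the domain.
[pam]
debug_level = 2

[domain/EXAMPLE.COM]
debug_level = 2
```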
3.7. Managing Login Permissions for Domain Users | 3.7. Managing Login Permissions for Domain Users By default, domain-side access control is applied, which means that login policies for domain users are defined in the domain itself. This default behavior can be overridden so that client-side access control is used. With client-side access control, login permissions are defined by local policies only. If a domain applies client-side access control, you can use the realmd system to configure basic allow or deny access rules for users from that domain. Note that these access rules either allow or deny access to all services on the system. More specific access rules must be set on a specific system resource or in the domain. To set the access rules, use the following two commands: realm deny The realm deny command simply denies access to all users within the domain. Use this command with the --all option. realm permit The realm permit command can be used to: grant access to all users by using the --all option, for example: grant access to specified users, for example: deny access to specified users by using the -x option, for example: Note that allowing access currently only works for users in primary domains, not for users in trusted domains. This is because while user logins must contain the domain name, SSSD currently cannot provide realmd with information about available child domains. Important It is safer to only allow access to specifically selected users or groups than to deny access to some, while enabling it to everyone else. Therefore, it is not recommended to allow access to all by default while only denying it to specified users with realm permit -x . Instead, Red Hat recommends maintaining a default no access policy for all users and only grant access to selected users using realm permit . For more information about the realm deny and realm permit commands, see the realm (8) man page. | [
"realm permit --all",
"realm permit [email protected] realm permit ' AD.EXAMPLE.COM\\user '",
"realm permit -x ' AD.EXAMPLE.COM\\user '"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/realmd-logins |
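A minimal sketch of the default-deny pattern recommended in the entry above, combining the realm deny and realm permit commands already shown there; the user value is the same placeholder form used in those commands, not a real account.

```bash
# Deny all domain users by default, then grant access to selected users only.
realm deny --all
realm permit 'AD.EXAMPLE.COM\user'
```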
Chapter 3. The CodeReady Linux Builder repository | Chapter 3. The CodeReady Linux Builder repository The CodeReady Linux Builder repository contains additional packages for use by developers. This repository is available with all Red Hat Enterprise Linux subscriptions but does not need to be enabled on your runtime deployments. Packages included in the CodeReady Linux Builder repository are unsupported. For more information, see How to enable and make use of content within CodeReady Linux Builder . The following table lists all the packages in the CodeReady Linux Builder repository along with their license. For a list of available modules and streams, see Section 3.1, "Modules in the CodeReady Linux Builder repository" . Package License accel-config-devel LGPLv2+ accountsservice-devel GPLv3+ adwaita-icon-theme-devel LGPLv3+ or CC-BY-SA anaconda-widgets-devel GPLv2+ and MIT ant ASL 2.0 ant-antlr ASL 2.0 ant-apache-bcel ASL 2.0 ant-apache-bsf ASL 2.0 ant-apache-log4j ASL 2.0 ant-apache-oro ASL 2.0 ant-apache-regexp ASL 2.0 ant-apache-resolver ASL 2.0 ant-apache-xalan2 ASL 2.0 ant-commons-logging ASL 2.0 ant-commons-net ASL 2.0 ant-contrib ASL 2.0 and ASL 1.1 ant-contrib-javadoc ASL 2.0 and ASL 1.1 ant-javadoc ASL 2.0 ant-javamail ASL 2.0 ant-jdepend ASL 2.0 ant-jmf ASL 2.0 ant-jsch ASL 2.0 ant-junit ASL 2.0 ant-lib ASL 2.0 ant-manual ASL 2.0 ant-swing ASL 2.0 ant-testutil ASL 2.0 ant-xz ASL 2.0 antlr-C++ ANTLR-PD antlr-javadoc ANTLR-PD antlr-manual ANTLR-PD antlr-tool ANTLR-PD aopalliance Public Domain aopalliance-javadoc Public Domain apache-commons-beanutils ASL 2.0 apache-commons-beanutils-javadoc ASL 2.0 apache-commons-cli ASL 2.0 apache-commons-cli-javadoc ASL 2.0 apache-commons-codec ASL 2.0 apache-commons-codec-javadoc ASL 2.0 apache-commons-collections ASL 2.0 apache-commons-collections-javadoc ASL 2.0 apache-commons-collections-testframework ASL 2.0 apache-commons-compress ASL 2.0 apache-commons-compress-javadoc ASL 2.0 apache-commons-exec ASL 2.0 apache-commons-exec-javadoc ASL 2.0 apache-commons-io ASL 2.0 apache-commons-io-javadoc ASL 2.0 apache-commons-jxpath ASL 2.0 apache-commons-jxpath-javadoc ASL 2.0 apache-commons-lang ASL 2.0 apache-commons-lang-javadoc ASL 2.0 apache-commons-lang3 ASL 2.0 apache-commons-lang3-javadoc ASL 2.0 apache-commons-logging ASL 2.0 apache-commons-logging-javadoc ASL 2.0 apache-commons-net ASL 2.0 apache-commons-net-javadoc ASL 2.0 apache-commons-parent ASL 2.0 apache-ivy ASL 2.0 apache-ivy-javadoc ASL 2.0 apache-parent ASL 2.0 apache-resource-bundles ASL 2.0 aqute-bnd ASL 2.0 aqute-bnd-javadoc ASL 2.0 aqute-bndlib ASL 2.0 asciidoc-doc GPL+ and GPLv2+ asio-devel Boost aspell-devel LGPLv2+ and LGPLv2 and GPLv2+ and BSD assertj-core ASL 2.0 assertj-core-javadoc ASL 2.0 atinject ASL 2.0 atinject-javadoc ASL 2.0 atinject-tck ASL 2.0 atkmm-devel LGPLv2+ atkmm-doc LGPLv2+ augeas-devel LGPLv2+ autoconf-archive GPLv3+ with exceptions autoconf213 GPLv2+ autogen GPLv3+ autogen-libopts-devel LGPLv3+ autotrace GPLv2+ and LGPLv2+ avahi-compat-howl LGPLv2+ avahi-compat-howl-devel LGPLv2+ avahi-compat-libdns_sd LGPLv2+ avahi-compat-libdns_sd-devel LGPLv2+ avahi-devel LGPLv2+ avahi-glib-devel LGPLv2+ avahi-gobject-devel LGPLv2+ avahi-ui LGPLv2+ avahi-ui-devel LGPLv2+ babl-devel LGPLv3+ and GPLv3+ babl-devel-docs LGPLv3+ and GPLv3+ bash-devel GPLv3+ bcc-devel ASL 2.0 bcc-doc ASL 2.0 bcel ASL 2.0 bcel-javadoc ASL 2.0 beust-jcommander ASL 2.0 beust-jcommander-javadoc ASL 2.0 bind9.16-devel MPLv2.0 bind9.16-doc MPLv2.0 bind9.16-libs MPLv2.0 bison-devel GPLv3+ 
blas-devel BSD bluez-libs-devel GPLv2+ bnd-maven-plugin ASL 2.0 boost-build Boost and MIT and Python boost-doc Boost and MIT and Python boost-examples Boost and MIT and Python boost-graph-mpich Boost and MIT and Python boost-graph-openmpi Boost and MIT and Python boost-jam Boost and MIT and Python boost-mpich Boost and MIT and Python boost-mpich-devel Boost and MIT and Python boost-mpich-python3 Boost and MIT and Python boost-numpy3 Boost and MIT and Python boost-openmpi Boost and MIT and Python boost-openmpi-devel Boost and MIT and Python boost-openmpi-python3 Boost and MIT and Python boost-python3 Boost and MIT and Python boost-python3-devel Boost and MIT and Python boost-static Boost and MIT and Python brasero-devel GPLv3+ brasero-libs GPLv3+ brlapi-devel LGPLv2+ bsf ASL 2.0 bsf-javadoc ASL 2.0 bsh ASL 2.0 and BSD and Public Domain bsh-javadoc ASL 2.0 and BSD and Public Domain bsh-manual ASL 2.0 and BSD and Public Domain byaccj Public Domain cairomm-devel LGPLv2+ cairomm-doc LGPLv2+ cal10n MIT cal10n-javadoc MIT cdi-api ASL 2.0 cdi-api-javadoc ASL 2.0 cdparanoia-devel LGPLv2 celt051-devel BSD cglib ASL 2.0 and BSD cglib-javadoc ASL 2.0 and BSD cifs-utils-devel GPLv3 clucene-core-devel LGPLv2+ or ASL 2.0 clutter-devel LGPLv2+ clutter-doc LGPLv2+ clutter-gst3-devel LGPLv2+ clutter-gtk-devel LGPLv2+ codemodel CDDL-1.1 or GPLv2 with exceptions cogl-devel LGPLv2+ cogl-doc LGPLv2+ colord-devel GPLv2+ and LGPLv2+ colord-devel-docs GPLv2+ and LGPLv2+ colord-gtk-devel LGPLv2+ compat-guile18 LGPLv2+ compat-guile18-devel LGPLv2+ corosync-vqsim BSD cppcheck GPLv3+ cppunit LGPLv2+ cppunit-devel LGPLv2+ cppunit-doc LGPLv2+ cracklib-devel LGPLv2+ crash-devel GPLv3 ctags-etags GPLv2+ and LGPLv2+ and Public Domain CUnit-devel LGPLv2+ cups-filters-devel LGPLv2 and MIT daxctl-devel LGPLv2 dblatex GPLv2+ and GPLv2 and LPPL and DMIT and Public Domain dbus-c++ LGPLv2+ dbus-c++-devel LGPLv2+ dbus-c++-glib LGPLv2+ dconf-devel LGPLv2+ and GPLv2+ and GPLv3+ dejagnu GPLv3+ devhelp GPLv2+ and LGPL2+ devhelp-devel GPLv2+ and LGPL2+ device-mapper-devel LGPLv2 device-mapper-event-devel LGPLv2 device-mapper-multipath-devel GPLv2 docbook-style-dsssl DMIT docbook-utils GPLv2+ docbook2X MIT docbook5-schemas Freely redistributable without restriction dotconf-devel LGPLv2 dotnet-build-reference-packages MIT dotnet-sdk-3.1-source-built-artifacts MIT and ASL 2.0 and BSD dotnet-sdk-5.0-source-built-artifacts MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-sdk-6.0-source-built-artifacts MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-sdk-7.0-source-built-artifacts MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-sdk-8.0-source-built-artifacts 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet5.0-build-reference-packages MIT dovecot MIT and LGPLv2 dovecot-devel 
MIT and LGPLv2 doxygen GPL+ doxygen-doxywizard GPL+ doxygen-latex GPL+ dpdk-devel BSD and LGPLv2 and GPLv2 drpm-devel LGPLv2+ and BSD dtc GPLv2+ dwarves GPLv2 dyninst-devel LGPLv2+ dyninst-doc LGPLv2+ dyninst-static LGPLv2+ dyninst-testsuite LGPLv2+ easymock ASL 2.0 easymock-javadoc ASL 2.0 efivar-devel LGPL-2.1 eglexternalplatform-devel MIT eigen3-devel MPLv2.0 and LGPLv2+ and BSD elfutils-devel-static GPLv2+ or LGPLv3+ elfutils-libelf-devel-static GPLv2+ or LGPLv3+ elinks GPLv2 enca GPLv2 enca-devel GPLv2 enchant-devel LGPLv2+ enchant2-devel LGPLv2+ evince-devel GPLv2+ and GPLv3+ and LGPLv2+ and MIT and Afmparse evolution-data-server-doc LGPLv2+ evolution-data-server-perl LGPLv2+ evolution-data-server-tests LGPLv2+ evolution-devel GPLv2+ and GFDL exec-maven-plugin ASL 2.0 exec-maven-plugin-javadoc ASL 2.0 execstack GPLv2+ exempi-devel BSD exiv2-devel GPLv2+ exiv2-doc GPLv2+ felix-osgi-compendium ASL 2.0 felix-osgi-compendium-javadoc ASL 2.0 felix-osgi-core ASL 2.0 felix-osgi-core-javadoc ASL 2.0 felix-osgi-foundation ASL 2.0 felix-osgi-foundation-javadoc ASL 2.0 felix-parent ASL 2.0 felix-utils ASL 2.0 felix-utils-javadoc ASL 2.0 fftw-doc GPLv2+ file-devel BSD fipscheck-devel BSD flac BSD and GPLv2+ and GFDL flac-devel BSD and GPLv2+ and GFDL flatpak LGPLv2+ flatpak-devel LGPLv2+ flatpak-session-helper LGPLv2+ flex-devel BSD and LGPLv2+ flite MIT flite-devel MIT fltk-devel LGPLv2+ with exceptions fontconfig-devel-doc MIT and Public Domain and UCD fontforge GPLv3+ fontpackages-devel LGPLv3+ forge-parent ASL 2.0 freeipmi-devel GPLv3+ freerdp-devel ASL 2.0 frei0r-devel GPLv2+ frei0r-plugins GPLv2+ fstrm-utils MIT fuse-sshfs GPLv2 fusesource-pom ASL 2.0 fwupd-devel LGPLv2+ galera GPLv2 gamin-devel LGPLv2 gc-devel BSD gcc-plugin-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-gcc-plugin-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-dyninst LGPLv2+ gcc-toolset-9-dyninst-devel LGPLv2+ gcc-toolset-9-dyninst-doc LGPLv2+ gcc-toolset-9-dyninst-static LGPLv2+ gcc-toolset-9-dyninst-testsuite LGPLv2+ gcc-toolset-9-gcc-plugin-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD GConf2-devel LGPLv2+ and GPLv2+ gdk-pixbuf2-xlib LGPLv2+ gdk-pixbuf2-xlib-devel LGPLv2+ gdm-devel GPLv2+ gdm-pam-extensions-devel GPLv2+ gegl04-devel LGPLv3+ geoclue2-devel GPLv2+ geronimo-annotation ASL 2.0 geronimo-annotation-javadoc ASL 2.0 geronimo-jms ASL 2.0 geronimo-jms-javadoc ASL 2.0 geronimo-jpa ASL 2.0 geronimo-jpa-javadoc ASL 2.0 geronimo-parent-poms ASL 2.0 gflags BSD gflags-devel BSD ghostscript-doc AGPLv3+ ghostscript-tools-dvipdf AGPLv3+ ghostscript-tools-fonts AGPLv3+ ghostscript-tools-printing AGPLv3+ giflib-devel MIT gjs-devel MIT and (MPLv1.1 or GPLv2+ or LGPLv2+) glade GPLv2+ and LGPLv2+ glade-devel GPLv2+ and LGPLv2+ glassfish-annotation-api CDDL or GPLv2 with exceptions glassfish-annotation-api-javadoc CDDL or GPLv2 with exceptions glassfish-el CDDL-1.1 or GPLv2 with exceptions glassfish-el-api (CDDL or GPLv2 with exceptions) and ASL 2.0 glassfish-el-javadoc CDDL-1.1 or GPLv2 with exceptions glassfish-jsp-api (CDDL-1.1 or GPLv2 with exceptions) and ASL 2.0 glassfish-jsp-api-javadoc (CDDL-1.1 or GPLv2 with exceptions) and ASL 2.0 glassfish-legal CDDL or GPLv2 with exceptions glassfish-master-pom CDDL or GPLv2 with exceptions glassfish-servlet-api (CDDL or GPLv2 with exceptions) and ASL 2.0 glassfish-servlet-api-javadoc (CDDL or GPLv2 with exceptions) and ASL 2.0 
glew-devel BSD and MIT glib2-doc LGPLv2+ glib2-static LGPLv2+ glibc-benchtests LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL glibc-nss-devel LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL glibc-static LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL glibmm24-devel LGPLv2+ glibmm24-doc LGPLv2+ glm-devel MIT glm-doc MIT glog BSD glog-devel BSD glusterfs-api-devel GPLv2 or LGPLv3+ glusterfs-devel GPLv2 or LGPLv3+ gmock BSD and ASL2.0 gmock-devel BSD and ASL2.0 gnome-bluetooth GPLv2+ gnome-bluetooth-libs-devel LGPLv2+ gnome-common GPLv2+ gnome-menus-devel LGPLv2+ gnome-software GPLv2+ gnome-software-devel GPLv2+ gnu-efi BSD gnu-efi-devel BSD gnuplot-doc gnuplot and MIT go-compilers-golang-compiler GPLv3+ google-guice ASL 2.0 google-guice-javadoc ASL 2.0 google-noto-sans-cjk-jp-fonts OFL google-roboto-slab-fonts ASL 2.0 gperf GPLv3+ gpgme-devel LGPLv2+ and GPLv3+ gpgmepp-devel LGPLv2+ and GPLv3+ graphviz-devel EPL-1.0 graphviz-doc EPL-1.0 graphviz-gd EPL-1.0 graphviz-python3 EPL-1.0 grilo-devel LGPLv2+ groff GPLv3+ and GFDL and BSD and MIT gsm-devel MIT gspell-devel LGPLv2+ gspell-doc LGPLv2+ gssdp-devel LGPLv2+ gssdp-docs LGPLv2+ gstreamer1-plugins-bad-free-devel LGPLv2+ and LGPLv2 gtest BSD and ASL2.0 gtest-devel BSD and ASL2.0 gtk-doc GPLv2+ and GFDL gtk-vnc2-devel LGPLv2+ gtk3-devel-docs LGPLv2+ gtkmm24-devel LGPLv2+ gtkmm24-docs LGPLv2+ gtkmm30-devel LGPLv2+ gtkmm30-doc LGPLv2+ gtksourceview3-devel LGPLv2+ gtkspell GPLv2+ gtkspell-devel GPLv2+ gtkspell3-devel GPLv2+ guava20 ASL 2.0 and CC0 guava20-javadoc ASL 2.0 and CC0 guava20-testlib ASL 2.0 and CC0 guice-assistedinject ASL 2.0 guice-bom ASL 2.0 guice-extensions ASL 2.0 guice-grapher ASL 2.0 guice-jmx ASL 2.0 guice-jndi ASL 2.0 guice-multibindings ASL 2.0 guice-parent ASL 2.0 guice-servlet ASL 2.0 guice-testlib ASL 2.0 guice-throwingproviders ASL 2.0 guile-devel LGPLv3+ gupnp-devel LGPLv2+ gupnp-igd-devel LGPLv2+ gvfs GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvnc-devel LGPLv2+ hamcrest BSD hamcrest-core BSD hamcrest-demo BSD hamcrest-javadoc BSD hawtjni ASL 2.0 and EPL-1.0 and BSD hawtjni-javadoc ASL 2.0 and EPL-1.0 and BSD hawtjni-runtime ASL 2.0 and EPL-1.0 and BSD help2man GPLv3+ hesiod-devel MIT hivex LGPLv2 hivex-devel LGPLv2 http-parser-devel MIT httpcomponents-client ASL 2.0 httpcomponents-client-cache ASL 2.0 httpcomponents-client-javadoc ASL 2.0 httpcomponents-core ASL 2.0 httpcomponents-core-javadoc ASL 2.0 httpcomponents-project ASL 2.0 hwloc-devel BSD hyphen-devel GPLv2 or LGPLv2+ or MPLv1.1 ibus-devel LGPLv2+ ibus-devel-docs LGPLv2+ ibus-table-devel LGPLv2+ ibus-table-tests LGPLv2+ ibus-typing-booster-tests GPLv3+ ilmbase-devel BSD ima-evm-utils-devel GPLv2 imake MIT intel-cmt-cat-devel BSD iproute-devel GPL-2.0-or-later ipset-devel GPLv2 irssi-devel GPLv2+ iscsi-initiator-utils-devel GPLv2+ isl-devel MIT isorelax MIT and ASL 1.1 isorelax-javadoc MIT and ASL 1.1 istack-commons CDDL-1.1 and GPLv2 with exceptions ivy-local BSD jakarta-commons-httpclient ASL 2.0 and (ASL 2.0 or LGPLv2+) jakarta-commons-httpclient-demo ASL 2.0 and (ASL 2.0 or LGPLv2+) jakarta-commons-httpclient-javadoc ASL 2.0 and (ASL 2.0 or LGPLv2+) jakarta-commons-httpclient-manual ASL 2.0 and (ASL 2.0 or LGPLv2+) jakarta-oro ASL 1.1 jakarta-oro-javadoc ASL 1.1 jansi ASL 2.0 jansi-javadoc ASL 2.0 jansi-native ASL 
2.0 jansi-native-javadoc ASL 2.0 jasper-devel JasPer java-1.8.0-openjdk-accessibility-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-accessibility-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-demo-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-demo-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-devel-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-devel-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-headless-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-headless-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-src-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-src-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-11-openjdk-demo-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-demo-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-devel-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-devel-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ 
and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-headless-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-headless-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-jmods-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-jmods-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-src-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-src-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-static-libs-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-static-libs-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-demo-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-demo-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-devel-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-devel-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-headless-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and 
ISC and FTL and RSA java-17-openjdk-headless-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-jmods-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-jmods-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-src-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-src-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-static-libs-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-static-libs-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-demo-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-demo-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-devel-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-devel-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-headless-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-headless-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-jmods-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ 
and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-jmods-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-src-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-src-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-static-libs-fastdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-static-libs-slowdebug ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java_cup MIT java_cup-javadoc MIT java_cup-manual MIT javacc BSD javacc-demo BSD javacc-javadoc BSD javacc-manual BSD javacc-maven-plugin ASL 2.0 javacc-maven-plugin-javadoc ASL 2.0 javamail CDDL-1.0 or GPLv2 with exceptions javamail-javadoc CDDL-1.0 or GPLv2 with exceptions javapackages-filesystem BSD javapackages-local BSD javapackages-tools BSD javassist MPLv1.1 or LGPLv2+ or ASL 2.0 javassist-javadoc MPLv1.1 or LGPLv2+ or ASL 2.0 jaxen BSD and W3C jaxen-demo BSD and W3C jaxen-javadoc BSD and W3C jbigkit-devel GPLv2+ jboss-interceptors-1.2-api CDDL or GPLv2 with exceptions jboss-interceptors-1.2-api-javadoc CDDL or GPLv2 with exceptions jboss-parent CC0 jcl-over-slf4j MIT and ASL 2.0 jdepend BSD jdepend-demo BSD jdepend-javadoc BSD jdependency ASL 2.0 jdependency-javadoc ASL 2.0 jdom Saxpath jdom-demo Saxpath jdom-javadoc Saxpath jdom2 Saxpath jdom2-javadoc Saxpath jflex BSD jflex-javadoc BSD jimtcl-devel BSD jline BSD jline-javadoc BSD jq-devel MIT and ASL 2.0 and CC-BY and GPLv3 js-uglify BSD jsch BSD jsch-javadoc BSD json-c-doc MIT jsoup MIT jsoup-javadoc MIT jsr-305 BSD and CC-BY jsr-305-javadoc BSD and CC-BY jtidy zlib jtidy-javadoc zlib Judy LGPLv2+ Judy-devel LGPLv2+ jul-to-slf4j MIT and ASL 2.0 junit EPL-1.0 junit-javadoc EPL-1.0 junit-manual EPL-1.0 jvnet-parent ASL 2.0 jzlib BSD jzlib-demo BSD jzlib-javadoc BSD kernel-tools-libs-devel GPLv2 keybinder3-devel MIT keybinder3-doc MIT kmod-devel GPLv2+ ladspa LGPLv2+ ladspa-devel LGPLv2+ lame-devel GPLv2+ lapack-devel BSD lapack-static BSD lasso-devel GPLv2+ latex2html GPLv2+ lcms2-devel MIT ldns-devel BSD ldns-doc BSD ldns-utils BSD lensfun LGPLv3 and CC-BY-SA lensfun-devel LGPLv3 leptonica-devel BSD and Leptonica libaec BSD libaec-devel BSD libao-devel GPLv2+ libappindicator-gtk3-devel LGPLv2 and LGPLv3 libappstream-glib-devel LGPLv2+ libarchive-devel BSD libassuan-devel LGPLv2+ and GPLv3+ libasyncns-devel LGPLv2+ libatasmart-devel LGPLv2+ libatomic_ops-devel GPLv2 and MIT libbabeltrace-devel MIT and GPLv2 libbasicobjects-devel GPLv3+ 
libblockdev-crypto-devel LGPLv2+ libblockdev-devel LGPLv2+ libblockdev-fs-devel LGPLv2+ libblockdev-loop-devel LGPLv2+ libblockdev-lvm-devel LGPLv2+ libblockdev-mdraid-devel LGPLv2+ libblockdev-part-devel LGPLv2+ libblockdev-swap-devel LGPLv2+ libblockdev-utils-devel LGPLv2+ libblockdev-vdo-devel LGPLv2+ libbpf-devel LGPLv2 or BSD libbpf-static LGPLv2 or BSD libburn-devel GPLv2+ libbytesize-devel LGPLv2+ libcdio-devel GPLv3+ libcdio-paranoia-devel GPLv3+ libcephfs-devel LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT libcephfs2 LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT libchamplain LGPLv2+ libchamplain-devel LGPLv2+ libchamplain-gtk LGPLv2+ libcmocka ASL 2.0 libcmocka-devel ASL 2.0 libcollection-devel LGPLv3+ libcomps-devel GPLv2+ libconfig-devel LGPLv2+ libcroco-devel LGPLv2 libcxl-devel ASL 2.0 libdaemon-devel LGPLv2+ libdap LGPLv2+ libdap-devel LGPLv2+ libdatrie-devel LGPLv2+ libdazzle GPLv3+ libdazzle-devel GPLv3+ libdb-cxx BSD and LGPLv2 and Sleepycat libdb-cxx-devel BSD and LGPLv2 and Sleepycat libdb-devel-doc BSD and LGPLv2 and Sleepycat libdb-sql BSD and LGPLv2 and Sleepycat libdb-sql-devel BSD and LGPLv2 and Sleepycat libdbusmenu-devel LGPLv3 or LGPLv2 and GPLv3 libdbusmenu-doc LGPLv3 or LGPLv2 and GPLv3 libdbusmenu-gtk3-devel LGPLv3 or LGPLv2 and GPLv3 libdhash-devel LGPLv3+ libdnet BSD libdnet-devel BSD libdnf-devel LGPLv2+ libdv LGPLv2+ libdv-devel LGPLv2+ libdvdread-devel GPLv2+ libdwarf LGPLv2 libdwarf-devel LGPLv2 libdwarf-static LGPLv2 libdwarf-tools GPLv2 libdwarves1 GPLv2 libecpg-devel PostgreSQL libedit-devel BSD libEMF LGPLv2+ and GPLv2+ libEMF-devel LGPLv2+ and GPLv2+ libeot MPLv2.0 libepubgen-devel MPLv2.0 libestr-devel LGPLv2+ libetonyek-devel MPLv2.0 libevdev-devel MIT libexif-devel LGPLv2+ libfabric-devel BSD or GPLv2 libfdt-devel GPLv2+ libfontenc-devel MIT libgcab1-devel LGPLv2+ libgee-devel LGPLv2+ libgexiv2-devel GPLv2+ libgit2-devel GPLv2 with exceptions libgit2-glib-devel LGPLv2+ libGLEW BSD and MIT libgnomekbd-devel LGPLv2+ libgphoto2-devel GPLv2+ and GPLv2 libgpod LGPLv2+ libgpod-devel LGPLv2+ libgpod-doc GFDL libgs-devel AGPLv3+ libgsf-devel LGPLv2 libgtop2-devel GPLv2+ libgudev-devel LGPLv2+ libguestfs-winsupport GPLv2+ libgusb-devel LGPLv2+ libgxps-devel LGPLv2+ libhbaapi-devel SNIA libIDL LGPLv2+ libIDL-devel LGPLv2+ libidn-devel LGPLv2+ and GPLv3+ and GFDL libiec61883-devel LGPLv2+ libimobiledevice LGPLv2+ libimobiledevice-devel LGPLv2+ libindicator-gtk3-devel GPLv3 libini_config-devel LGPLv3+ libinput-devel MIT libiscsi LGPLv2+ libiscsi-devel LGPLv2+ libiscsi-utils GPLv2+ libisoburn-devel GPLv2+ libisofs-devel GPLv2+ and LGPLv2+ libknet1 LGPLv2+ libknet1-devel LGPLv2+ libksba-devel (LGPLv3+ or GPLv2+) and GPLv3+ liblangtag-devel LGPLv3+ or MPLv2.0 liblangtag-doc LGPLv3+ or MPLv2.0 liblangtag-gobject LGPLv3+ or MPLv2.0 liblockfile-devel GPLv2+ and LGPLv2+ libmad GPLv2+ libmad-devel GPLv2+ libmemcached BSD libmemcached-devel BSD libmicrohttpd-devel LGPLv2+ libmicrohttpd-doc LGPLv2+ libmnl-devel LGPLv2+ libmodulemd-devel MIT libmount-devel LGPLv2+ libmpcdec-devel BSD libmspack-devel LGPLv2 libmtp-devel LGPLv2+ libmusicbrainz5-devel LGPLv2 libnbd LGPLv2+ libnbd-devel LGPLv2+ and BSD libnet-devel BSD libnetapi-devel GPL-3.0-or-later AND LGPL-3.0-or-later libnetfilter_conntrack-devel GPLv2+ libnetfilter_queue-devel GPLv2 libnfnetlink-devel GPLv2+ libnfsidmap-devel MIT and GPLv2 and GPLv2+ and BSD libnftnl-devel GPLv2+ libnghttp2-devel MIT libnice-devel LGPLv2 and MPLv1.1 libnma-devel GPLv2+ and 
LGPLv2+ libnsl2-devel BSD and LGPLv2+ libocxl-devel ASL 2.0 libodfgen-devel LGPLv2+ or MPLv2.0 libogg-devel-docs BSD liboggz BSD libopenraw-devel LGPLv3+ libopenraw-gnome LGPLv3+ libopenraw-gnome-devel LGPLv3+ libpaper-devel GPLv2 libpath_utils-devel LGPLv3+ libpcap-devel BSD with advertising libpciaccess-devel MIT libpeas-devel LGPLv2+ libpfm-static MIT libpinyin-devel GPLv3+ libplist LGPLv2+ libplist-devel LGPLv2+ libpmem-debug BSD libpmemblk-debug BSD libpmemlog-debug BSD libpmemobj-debug BSD libpmempool-debug BSD libproxy-devel LGPLv2+ libpsl-devel MIT libpsm2-devel BSD or GPLv2 libpurple-devel BSD and GPLv2+ and GPLv2 and LGPLv2+ and MIT libpwquality-devel BSD or GPLv2+ libqhull Qhull libqhull_p Qhull libqhull_r Qhull libquvi-devel AGPLv3+ librabbitmq-devel MIT librados-devel LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT libradosstriper-devel LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT libradosstriper1 LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT LibRaw-devel BSD and (CDDL or LGPLv2) libraw1394-devel LGPLv2+ librbd-devel LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT librdkafka-devel BSD libref_array-devel LGPLv3+ libreoffice-sdk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-sdk-doc (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 librepo-devel LGPLv2+ librevenge-devel (LGPLv2+ or MPLv2.0) and BSD librhsm-devel LGPLv2+ librpmem-debug BSD librx GPLv2+ librx-devel GPLv2+ libsamplerate-devel BSD libsass MIT libsass-devel MIT libselinux-static Public Domain libsemanage-devel LGPLv2+ libsepol-static LGPLv2+ libserf ASL 2.0 libserf-devel ASL 2.0 libshout-devel LGPLv2+ libsigc++20-devel LGPLv2+ libsigc++20-doc LGPLv2+ libsigsegv-devel GPLv2+ libsmbclient-devel GPL-3.0-or-later AND LGPL-3.0-or-later libsmi-devel GPLv2+ and BSD libsndfile-devel LGPLv2+ and GPLv2+ and BSD libsolv-devel BSD libsolv-tools BSD libspectre-devel GPLv2+ libsrtp-devel BSD libss-devel MIT libsss_nss_idmap-devel LGPLv3+ libstdc++-static GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libstemmer-devel BSD libstoragemgmt-devel LGPLv2+ libsysfs-devel LGPLv2+ libthai-devel LGPLv2+ libtheora-devel BSD libtiff-tools libtiff libtimezonemap-devel GPLv3 libtraceevent-devel LGPLv2+ and GPLv2+ libtracefs-devel LGPLv2+ and GPLv2+ libucil-devel GPLv2+ libudisks2-devel LGPLv2+ libunicap-devel GPLv2+ libuninameslist BSD libunistring-devel GPLv2+ or LGPLv3+ liburing-devel LGPLv2+ libusb-devel LGPLv2+ libusbmuxd LGPLv2+ libusbmuxd-devel LGPLv2+ libuser-devel LGPLv2+ libutempter-devel LGPLv2+ libuv-devel MIT and BSD and ISC libv4l-devel LGPLv2+ libvarlink-devel ASL 2.0 libvdpau-devel MIT libvirt LGPLv2+ libvirt-client LGPLv2+ libvirt-daemon LGPLv2+ libvirt-daemon-config-network LGPLv2+ libvirt-daemon-config-nwfilter LGPLv2+ libvirt-daemon-driver-interface LGPLv2+ libvirt-daemon-driver-network LGPLv2+ libvirt-daemon-driver-nodedev LGPLv2+ libvirt-daemon-driver-nwfilter LGPLv2+ libvirt-daemon-driver-secret LGPLv2+ libvirt-daemon-driver-storage LGPLv2+ libvirt-daemon-driver-storage-core LGPLv2+ libvirt-daemon-driver-storage-disk LGPLv2+ libvirt-daemon-driver-storage-iscsi LGPLv2+ libvirt-daemon-driver-storage-iscsi-direct LGPLv2+ libvirt-daemon-driver-storage-logical LGPLv2+ 
libvirt-daemon-driver-storage-mpath LGPLv2+ libvirt-daemon-driver-storage-scsi LGPLv2+ libvirt-dbus LGPLv2+ libvirt-devel LGPLv2+ libvirt-docs LGPLv2+ libvirt-glib LGPLv2+ libvirt-libs LGPLv2+ libvirt-nss LGPLv2+ libvirt-wireshark LGPLv2+ libvisio-devel MPLv2.0 libvisual-devel LGPLv2+ libvmem-debug BSD libvmmalloc-debug BSD libvncserver-devel GPLv2+ libvoikko-devel GPLv2+ libvorbis-devel BSD libvorbis-devel-docs BSD libvpd-devel LGPLv2+ libvpx-devel BSD libwacom-devel MIT libwbclient-devel GPL-3.0-or-later AND LGPL-3.0-or-later libwmf-devel LGPLv2+ and GPLv2+ and GPL+ libwnck3-devel LGPLv2+ libwpd-devel LGPLv2+ or MPLv2.0 libwpd-doc LGPLv2+ or MPLv2.0 libwpe-devel BSD libwpg-devel LGPLv2+ or MPLv2.0 libwpg-doc LGPLv2+ or MPLv2.0 libwps-devel LGPLv2+ or MPLv2.0 libwps-doc LGPLv2+ or MPLv2.0 libwsman-devel BSD libxcrypt-static LGPLv2+ and BSD and Public Domain libXdmcp-devel MIT libxdp GPLv2 libxdp-devel GPLv2 libxdp-static GPLv2 libXfont2-devel MIT libxkbcommon-x11-devel MIT libxkbfile-devel MIT libxklavier-devel LGPLv2+ libxmlb-devel LGPLv2+ libXNVCtrl-devel GPLv2+ libXres-devel MIT libXvMC-devel MIT libyaml-devel MIT libzdnn-devel ASL 2.0 libzdnn-static ASL 2.0 libzpc-devel MIT linuxdoc-tools MIT lmdb OpenLDAP lmdb-devel OpenLDAP lockdev-devel LGPLv2 log4j-over-slf4j MIT and ASL 2.0 log4j12 ASL 2.0 log4j12-javadoc ASL 2.0 lpsolve LGPLv2+ lpsolve-devel LGPLv2+ lttng-ust-devel LGPLv2 and GPLv2 and MIT lua MIT lua-devel MIT lua-filesystem MIT lua-lunit MIT lua-posix MIT lvm2-devel LGPLv2 lynx GPLv2 mariadb GPLv2 with exceptions and LGPLv2 and BSD mariadb-backup GPLv2 with exceptions and LGPLv2 and BSD mariadb-common GPLv2 with exceptions and LGPLv2 and BSD mariadb-devel GPLv2 with exceptions and LGPLv2 and BSD mariadb-embedded GPLv2 with exceptions and LGPLv2 and BSD mariadb-embedded-devel GPLv2 with exceptions and LGPLv2 and BSD mariadb-errmsg GPLv2 with exceptions and LGPLv2 and BSD mariadb-gssapi-server GPLv2 with exceptions and LGPLv2 and BSD mariadb-oqgraph-engine GPLv2 with exceptions and LGPLv2 and BSD mariadb-server GPLv2 with exceptions and LGPLv2 and BSD mariadb-server-galera GPLv2 with exceptions and LGPLv2 and BSD mariadb-server-utils GPLv2 with exceptions and LGPLv2 and BSD mariadb-test GPLv2 with exceptions and LGPLv2 and BSD marisa-devel BSD or LGPLv2+ maven ASL 2.0 and MIT maven-antrun-plugin ASL 2.0 maven-antrun-plugin-javadoc ASL 2.0 maven-archiver ASL 2.0 maven-archiver-javadoc ASL 2.0 maven-artifact ASL 2.0 maven-artifact-manager ASL 2.0 maven-artifact-resolver ASL 2.0 maven-artifact-resolver-javadoc ASL 2.0 maven-artifact-transfer ASL 2.0 maven-artifact-transfer-javadoc ASL 2.0 maven-assembly-plugin ASL 2.0 maven-assembly-plugin-javadoc ASL 2.0 maven-cal10n-plugin MIT maven-clean-plugin ASL 2.0 maven-clean-plugin-javadoc ASL 2.0 maven-common-artifact-filters ASL 2.0 maven-common-artifact-filters-javadoc ASL 2.0 maven-compiler-plugin ASL 2.0 maven-compiler-plugin-javadoc ASL 2.0 maven-dependency-analyzer ASL 2.0 maven-dependency-analyzer-javadoc ASL 2.0 maven-dependency-plugin ASL 2.0 maven-dependency-plugin-javadoc ASL 2.0 maven-dependency-tree ASL 2.0 maven-dependency-tree-javadoc ASL 2.0 maven-doxia ASL 2.0 maven-doxia-core ASL 2.0 maven-doxia-javadoc ASL 2.0 maven-doxia-logging-api ASL 2.0 maven-doxia-module-apt ASL 2.0 maven-doxia-module-confluence ASL 2.0 maven-doxia-module-docbook-simple ASL 2.0 maven-doxia-module-fml ASL 2.0 maven-doxia-module-latex ASL 2.0 maven-doxia-module-rtf ASL 2.0 maven-doxia-module-twiki ASL 2.0 maven-doxia-module-xdoc ASL 2.0 
maven-doxia-module-xhtml ASL 2.0 maven-doxia-modules ASL 2.0 maven-doxia-sink-api ASL 2.0 maven-doxia-sitetools ASL 2.0 maven-doxia-sitetools-javadoc ASL 2.0 maven-doxia-test-docs ASL 2.0 maven-doxia-tests ASL 2.0 maven-enforcer ASL 2.0 maven-enforcer-api ASL 2.0 maven-enforcer-javadoc ASL 2.0 maven-enforcer-plugin ASL 2.0 maven-enforcer-rules ASL 2.0 maven-failsafe-plugin ASL 2.0 and CPL maven-file-management ASL 2.0 maven-file-management-javadoc ASL 2.0 maven-filtering ASL 2.0 maven-filtering-javadoc ASL 2.0 maven-hawtjni-plugin ASL 2.0 and EPL-1.0 and BSD maven-install-plugin ASL 2.0 maven-install-plugin-javadoc ASL 2.0 maven-invoker ASL 2.0 maven-invoker-javadoc ASL 2.0 maven-invoker-plugin ASL 2.0 maven-invoker-plugin-javadoc ASL 2.0 maven-jar-plugin ASL 2.0 maven-jar-plugin-javadoc ASL 2.0 maven-javadoc ASL 2.0 and MIT maven-lib ASL 2.0 and MIT maven-local BSD maven-model ASL 2.0 maven-monitor ASL 2.0 maven-parent ASL 2.0 maven-plugin-annotations ASL 2.0 maven-plugin-build-helper MIT maven-plugin-build-helper-javadoc MIT maven-plugin-bundle ASL 2.0 maven-plugin-bundle-javadoc ASL 2.0 maven-plugin-descriptor ASL 2.0 maven-plugin-plugin ASL 2.0 maven-plugin-registry ASL 2.0 maven-plugin-testing ASL 2.0 maven-plugin-testing-harness ASL 2.0 maven-plugin-testing-javadoc ASL 2.0 maven-plugin-testing-tools ASL 2.0 maven-plugin-tools ASL 2.0 maven-plugin-tools-annotations ASL 2.0 maven-plugin-tools-ant ASL 2.0 maven-plugin-tools-api ASL 2.0 maven-plugin-tools-beanshell ASL 2.0 maven-plugin-tools-generators ASL 2.0 maven-plugin-tools-java ASL 2.0 maven-plugin-tools-javadoc ASL 2.0 maven-plugin-tools-javadocs ASL 2.0 maven-plugin-tools-model ASL 2.0 maven-plugins-pom ASL 2.0 maven-profile ASL 2.0 maven-project ASL 2.0 maven-remote-resources-plugin ASL 2.0 maven-remote-resources-plugin-javadoc ASL 2.0 maven-reporting-api ASL 2.0 maven-reporting-api-javadoc ASL 2.0 maven-reporting-impl ASL 2.0 maven-reporting-impl-javadoc ASL 2.0 maven-resolver ASL 2.0 maven-resolver-api ASL 2.0 maven-resolver-connector-basic ASL 2.0 maven-resolver-impl ASL 2.0 maven-resolver-javadoc ASL 2.0 maven-resolver-spi ASL 2.0 maven-resolver-test-util ASL 2.0 maven-resolver-transport-classpath ASL 2.0 maven-resolver-transport-file ASL 2.0 maven-resolver-transport-http ASL 2.0 maven-resolver-transport-wagon ASL 2.0 maven-resolver-util ASL 2.0 maven-resources-plugin ASL 2.0 maven-resources-plugin-javadoc ASL 2.0 maven-script ASL 2.0 maven-script-ant ASL 2.0 maven-script-beanshell ASL 2.0 maven-script-interpreter ASL 2.0 maven-script-interpreter-javadoc ASL 2.0 maven-settings ASL 2.0 maven-shade-plugin ASL 2.0 maven-shade-plugin-javadoc ASL 2.0 maven-shared ASL 2.0 maven-shared-incremental ASL 2.0 maven-shared-incremental-javadoc ASL 2.0 maven-shared-io ASL 2.0 maven-shared-io-javadoc ASL 2.0 maven-shared-utils ASL 2.0 maven-shared-utils-javadoc ASL 2.0 maven-source-plugin ASL 2.0 maven-source-plugin-javadoc ASL 2.0 maven-surefire ASL 2.0 and CPL maven-surefire-javadoc ASL 2.0 and CPL maven-surefire-plugin ASL 2.0 and CPL maven-surefire-provider-junit ASL 2.0 and CPL maven-surefire-provider-testng ASL 2.0 and CPL maven-surefire-report-parser ASL 2.0 and CPL maven-surefire-report-plugin ASL 2.0 and CPL maven-test-tools ASL 2.0 maven-toolchain ASL 2.0 maven-verifier ASL 2.0 maven-verifier-javadoc ASL 2.0 maven-wagon ASL 2.0 maven-wagon-file ASL 2.0 maven-wagon-ftp ASL 2.0 maven-wagon-http ASL 2.0 maven-wagon-http-lightweight ASL 2.0 maven-wagon-http-shared ASL 2.0 maven-wagon-javadoc ASL 2.0 maven-wagon-provider-api ASL 2.0 
maven-wagon-providers ASL 2.0 maven2-javadoc ASL 2.0 memkind-devel BSD mesa-libgbm-devel MIT mesa-libOSMesa-devel MIT meson ASL 2.0 metis ASL 2.0 and BSD and LGPLv2+ metis-devel ASL 2.0 and BSD and LGPLv2+ mingw-binutils-generic GPLv2+ and LGPLv2+ and GPLv3+ and LGPLv3+ mingw-filesystem-base GPLv2+ mingw32-binutils GPLv2+ and LGPLv2+ and GPLv3+ and LGPLv3+ mingw32-bzip2 BSD mingw32-bzip2-static BSD mingw32-cairo LGPLv2 or MPLv1.1 mingw32-cpp GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions mingw32-crt Public Domain and ZPLv2.1 mingw32-expat MIT mingw32-filesystem GPLv2+ mingw32-fontconfig MIT mingw32-freetype FTL or GPLv2+ mingw32-freetype-static FTL or GPLv2+ mingw32-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions mingw32-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions mingw32-gettext GPLv2+ and LGPLv2+ mingw32-gettext-static GPLv2+ and LGPLv2+ mingw32-glib2 LGPLv2+ mingw32-glib2-static LGPLv2+ mingw32-gstreamer1 LGPLv2+ mingw32-harfbuzz MIT mingw32-harfbuzz-static MIT mingw32-headers Public Domain and LGPLv2+ and ZPLv2.1 mingw32-icu MIT and UCD and Public Domain mingw32-libffi BSD mingw32-libjpeg-turbo wxWidgets mingw32-libjpeg-turbo-static wxWidgets mingw32-libpng zlib mingw32-libpng-static zlib mingw32-libtiff libtiff mingw32-libtiff-static libtiff mingw32-openssl OpenSSL mingw32-pcre BSD mingw32-pcre-static BSD mingw32-pixman MIT mingw32-pkg-config GPLv2+ mingw32-readline GPLv2+ mingw32-sqlite Public Domain mingw32-sqlite-static Public Domain mingw32-termcap GPLv2+ mingw32-win-iconv Public Domain mingw32-win-iconv-static Public Domain mingw32-winpthreads MIT and BSD mingw32-winpthreads-static MIT and BSD mingw32-zlib zlib mingw32-zlib-static zlib mingw64-binutils GPLv2+ and LGPLv2+ and GPLv3+ and LGPLv3+ mingw64-bzip2 BSD mingw64-bzip2-static BSD mingw64-cairo LGPLv2 or MPLv1.1 mingw64-cpp GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions mingw64-crt Public Domain and ZPLv2.1 mingw64-expat MIT mingw64-filesystem GPLv2+ mingw64-fontconfig MIT mingw64-freetype FTL or GPLv2+ mingw64-freetype-static FTL or GPLv2+ mingw64-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions mingw64-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions mingw64-gettext GPLv2+ and LGPLv2+ mingw64-gettext-static GPLv2+ and LGPLv2+ mingw64-glib2 LGPLv2+ mingw64-glib2-static LGPLv2+ mingw64-gstreamer1 LGPLv2+ mingw64-harfbuzz MIT mingw64-harfbuzz-static MIT mingw64-headers Public Domain and LGPLv2+ and ZPLv2.1 mingw64-icu MIT and UCD and Public Domain mingw64-libffi BSD mingw64-libjpeg-turbo wxWidgets mingw64-libjpeg-turbo-static wxWidgets mingw64-libpng zlib mingw64-libpng-static zlib mingw64-libtiff libtiff mingw64-libtiff-static libtiff mingw64-openssl OpenSSL mingw64-pcre BSD mingw64-pcre-static BSD mingw64-pixman MIT mingw64-pkg-config GPLv2+ mingw64-readline GPLv2+ mingw64-sqlite Public Domain mingw64-sqlite-static Public Domain mingw64-termcap GPLv2+ mingw64-win-iconv Public Domain mingw64-win-iconv-static Public Domain mingw64-winpthreads MIT and BSD mingw64-winpthreads-static MIT and BSD mingw64-zlib zlib mingw64-zlib-static zlib mobile-broadband-provider-info-devel Public Domain mockito MIT mockito-javadoc MIT mod_dav_svn ASL 2.0 modello ASL 2.0 and BSD and MIT modello-javadoc ASL 2.0 and BSD and MIT ModemManager GPLv2+ ModemManager-devel GPLv2+ ModemManager-glib-devel GPLv2+ mojo-parent ASL 2.0 mozjs52-devel MPLv2.0 and MPLv1.1 and BSD and GPLv2+ and GPLv3+ and LGPLv2.1 and LGPLv2.1+ and AFL and ASL 2.0 
mozjs60-devel MPLv2.0 and MPLv1.1 and BSD and GPLv2+ and GPLv3+ and LGPLv2+ and AFL and ASL 2.0 mpdecimal++ BSD mpdecimal-devel BSD mpdecimal-doc FBSDDL and MIT mpg123-devel LGPLv2+ mtdev-devel MIT munge-devel GPLv3+ and LGPLv3+ munge-maven-plugin CDDL-1.0 munge-maven-plugin-javadoc CDDL-1.0 mutter-devel GPLv2+ mythes-devel BSD and MIT nasm BSD nautilus GPLv3+ nautilus-devel LGPLv2+ nbdfuse LGPLv2+ and BSD ndctl-devel LGPLv2 neon-devel LGPLv2+ and GPLv2+ netcf LGPLv2+ netcf-devel LGPLv2+ netcf-libs LGPLv2+ netpbm-devel BSD and GPLv2 and IJG and MIT and Public Domain netpbm-doc BSD and GPLv2 and IJG and MIT and Public Domain NetworkManager-libnm-devel LGPLv2+ nftables-devel GPLv2 nghttp2 MIT ninja-build ASL 2.0 nkf BSD nmstate-devel ASL 2.0 nss_hesiod LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL objectweb-asm BSD objectweb-asm-javadoc BSD objectweb-pom ASL 2.0 objenesis ASL 2.0 objenesis-javadoc ASL 2.0 ocaml QPL and (LGPLv2+ with exceptions) ocaml-camlp4 LGPLv2+ with exceptions ocaml-camlp4-devel LGPLv2+ with exceptions ocaml-compiler-libs QPL and (LGPLv2+ with exceptions) ocaml-cppo BSD ocaml-extlib LGPLv2+ with exceptions ocaml-extlib-devel LGPLv2+ with exceptions ocaml-findlib BSD ocaml-findlib-devel BSD ocaml-hivex LGPLv2 ocaml-hivex-devel LGPLv2 ocaml-labltk LGPLv2+ with exceptions ocaml-labltk-devel LGPLv2+ with exceptions ocaml-libguestfs LGPLv2+ ocaml-libguestfs-devel LGPLv2+ ocaml-libnbd LGPLv2+ ocaml-libnbd-devel LGPLv2+ ocaml-ocamlbuild LGPLv2+ with exceptions ocaml-ocamlbuild-devel LGPLv2+ with exceptions ocaml-ocamldoc QPL and (LGPLv2+ with exceptions) ocaml-runtime QPL and (LGPLv2+ with exceptions) ocl-icd-devel BSD oniguruma-devel BSD openal-soft-devel LGPLv2+ openblas-devel BSD openblas-openmp BSD openblas-openmp64 BSD openblas-openmp64_ BSD openblas-Rblas BSD openblas-serial64 BSD openblas-serial64_ BSD openblas-static BSD openblas-threads64 BSD openblas-threads64_ BSD opencl-filesystem Public Domain opencl-headers MIT opencryptoki-devel CPL opencsd-devel BSD opencv BSD opencv-devel BSD OpenEXR-devel BSD OpenIPMI-devel LGPLv2+ and GPLv2+ or BSD openjade DMIT openjpeg2-devel BSD and MIT openjpeg2-tools BSD and MIT openldap-servers OpenLDAP openscap-engine-sce-devel LGPLv2+ openslp-devel BSD opensm-devel GPLv2 or BSD opensp MIT opensp-devel MIT os-maven-plugin ASL 2.0 os-maven-plugin-javadoc ASL 2.0 osgi-annotation ASL 2.0 osgi-annotation-javadoc ASL 2.0 osgi-compendium ASL 2.0 osgi-compendium-javadoc ASL 2.0 osgi-core ASL 2.0 osgi-core-javadoc ASL 2.0 PackageKit GPLv2+ and LGPLv2+ PackageKit-glib-devel GPLv2+ and LGPLv2+ pam_wrapper GPLv3+ pandoc GPLv2+ pandoc-common GPLv2+ pangomm-devel LGPLv2+ pangomm-doc LGPLv2+ papi-testsuite BSD parted-devel GPLv3+ pcre-static BSD pcre2-tools BSD and GPLv3+ pcsc-lite-devel BSD perl-AnyEvent GPL+ or Artistic perl-B-Hooks-EndOfScope GPL+ or Artistic perl-Canary-Stability GPL+ or Artistic perl-Capture-Tiny ASL 2.0 perl-Class-Accessor GPL+ or Artistic perl-Class-Data-Inheritable GPL+ or Artistic perl-Class-Factory-Util GPL+ or Artistic perl-Class-Method-Modifiers GPL+ or Artistic perl-Class-Singleton GPL+ or Artistic perl-Class-Tiny ASL 2.0 perl-Class-XSAccessor GPL+ or Artistic perl-Clone GPL+ or Artistic perl-common-sense GPL+ or Artistic perl-Config-AutoConf GPL+ or Artistic perl-Data-UUID BSD and MIT perl-Date-ISO8601 GPL+ or Artistic perl-DateTime Artistic 2.0 perl-DateTime-Format-Builder Artistic 2.0 and (GPL+ or Artistic) perl-DateTime-Format-HTTP 
GPL+ or Artistic perl-DateTime-Format-ISO8601 GPL+ or Artistic perl-DateTime-Format-Mail GPL+ or Artistic perl-DateTime-Format-Strptime Artistic 2.0 perl-DateTime-Locale (GPL+ or Artistic) and Unicode perl-DateTime-TimeZone (GPL+ or Artistic) and Public Domain perl-DateTime-TimeZone-SystemV GPL+ or Artistic perl-DateTime-TimeZone-Tzfile GPL+ or Artistic perl-Devel-CallChecker GPL+ or Artistic perl-Devel-Caller GPL+ or Artistic perl-Devel-CheckLib GPL+ or Artistic perl-Devel-GlobalDestruction GPL+ or Artistic perl-Devel-LexAlias GPL+ or Artistic perl-Devel-StackTrace Artistic 2.0 perl-Devel-Symdump GPL+ or Artistic perl-Digest-CRC Public Domain perl-Digest-SHA1 GPL+ or Artistic perl-Dist-CheckConflicts GPL+ or Artistic perl-DynaLoader-Functions GPL+ or Artistic perl-Eval-Closure GPL+ or Artistic perl-Exception-Class GPL+ or Artistic perl-Exporter-Tiny GPL+ or Artistic perl-File-BaseDir GPL+ or Artistic perl-File-chdir GPL+ or Artistic perl-File-Copy-Recursive GPL+ or Artistic perl-File-DesktopEntry GPL+ or Artistic perl-File-Find-Object GPLv2+ or Artistic 2.0 perl-File-Find-Rule GPL+ or Artistic perl-File-MimeInfo GPL+ or Artistic perl-File-ReadBackwards GPL+ or Artistic perl-File-Remove GPL+ or Artistic perl-hivex LGPLv2 perl-HTML-Tree GPL+ or Artistic perl-HTTP-Daemon GPL+ or Artistic perl-Import-Into GPL+ or Artistic perl-Importer GPL+ or Artistic perl-IO-All GPL+ or Artistic perl-IO-stringy GPL+ or Artistic perl-IO-Tty (GPL+ or Artistic) and BSD perl-IPC-Run GPL+ or Artistic perl-IPC-Run3 GPL+ or Artistic or BSD perl-JSON-XS GPL+ or Artistic perl-ldns BSD perl-List-MoreUtils (GPL+ or Artistic) and ASL 2.0 perl-List-MoreUtils-XS (GPL+ or Artistic) and ASL 2.0 perl-Locale-gettext GPL+ or Artistic perl-MIME-Charset GPL+ or Artistic perl-MIME-Types GPL+ or Artistic perl-Module-Implementation Artistic 2.0 perl-Module-Install GPL+ or Artistic perl-Module-Install-AuthorTests GPL+ or Artistic perl-Module-Install-ReadmeFromPod GPL+ or Artistic perl-Module-ScanDeps GPL+ or Artistic perl-namespace-autoclean GPL+ or Artistic perl-namespace-clean GPL+ or Artistic perl-NKF BSD perl-Number-Compare GPL+ or Artistic perl-Package-DeprecationManager Artistic 2.0 perl-Package-Stash GPL+ or Artistic perl-Package-Stash-XS GPL+ or Artistic perl-PadWalker GPL+ or Artistic perl-Params-Classify GPL+ or Artistic perl-Params-Validate Artistic 2.0 and (GPL+ or Artistic) perl-Params-ValidationCompiler Artistic 2.0 perl-Path-Tiny ASL 2.0 perl-Perl-Destruct-Level GPL+ or Artistic perl-PerlIO-utf8_strict GPL+ or Artistic perl-Pod-Coverage GPL+ or Artistic perl-Pod-Markdown GPL+ or Artistic perl-prefork GPL+ or Artistic perl-Readonly GPL+ or Artistic perl-Ref-Util MIT perl-Ref-Util-XS MIT perl-Role-Tiny GPL+ or Artistic perl-Scope-Guard GPL+ or Artistic perl-SGMLSpm GPLv2+ perl-Specio Artistic 2.0 perl-Sub-Exporter-Progressive GPL+ or Artistic perl-Sub-Identify GPL+ or Artistic perl-Sub-Info GPL+ or Artistic perl-Sub-Name GPL+ or Artistic perl-Sub-Uplevel GPL+ or Artistic perl-SUPER GPL+ or Artistic perl-Switch GPL+ or Artistic perl-Sys-Virt GPLv2+ or Artistic perl-Taint-Runtime GPL+ or Artistic perl-Term-Size-Any GPL+ or Artistic perl-Term-Size-Perl GPL+ or Artistic perl-Term-Table GPL+ or Artistic perl-Test-Deep GPL+ or Artistic perl-Test-Differences GPL+ or Artistic perl-Test-Exception GPL+ or Artistic perl-Test-Fatal GPL+ or Artistic perl-Test-LongString GPL+ or Artistic perl-Test-NoWarnings LGPLv2+ perl-Test-Pod GPL+ or Artistic perl-Test-Pod-Coverage Artistic 2.0 perl-Test-Requires GPL+ or Artistic perl-Test-Taint 
GPL+ or Artistic perl-Test-Warn GPL+ or Artistic perl-Test-Warnings GPL+ or Artistic perl-Test2-Suite GPL+ or Artistic perl-Text-CharWidth GPL+ or Artistic perl-Text-WrapI18N GPL+ or Artistic perl-Tie-IxHash GPL+ or Artistic perl-Tk-devel (GPL+ or Artistic) and SWL perl-Types-Serialiser GPL+ or Artistic perl-Unicode-EastAsianWidth CC0 perl-Unicode-LineBreak GPL+ or Artistic perl-Unicode-UTF8 GPL+ or Artistic perl-Variable-Magic GPL+ or Artistic perl-XML-DOM GPL+ or Artistic perl-XML-RegExp GPL+ or Artistic perl-XML-Twig GPL+ or Artistic perl-YAML-LibYAML GPL+ or Artistic perl-YAML-Syck BSD and MIT perl-YAML-Tiny GPL+ or Artistic perltidy GPLv2+ pidgin-devel BSD and GPLv2+ and GPLv2 and LGPLv2+ and MIT plexus-ant-factory ASL 2.0 plexus-ant-factory-javadoc ASL 2.0 plexus-archiver ASL 2.0 plexus-archiver-javadoc ASL 2.0 plexus-bsh-factory MIT plexus-bsh-factory-javadoc MIT plexus-build-api ASL 2.0 plexus-build-api-javadoc ASL 2.0 plexus-cipher ASL 2.0 plexus-cipher-javadoc ASL 2.0 plexus-classworlds ASL 2.0 and Plexus plexus-classworlds-javadoc ASL 2.0 and Plexus plexus-cli ASL 2.0 plexus-cli-javadoc ASL 2.0 plexus-compiler MIT and ASL 2.0 plexus-compiler-extras MIT and ASL 2.0 and ASL 1.1 plexus-compiler-javadoc MIT and ASL 2.0 and ASL 1.1 plexus-compiler-pom MIT and ASL 2.0 plexus-component-api ASL 2.0 plexus-component-api-javadoc ASL 2.0 plexus-component-factories-pom ASL 2.0 plexus-components-pom ASL 2.0 plexus-containers ASL 2.0 and MIT and xpp plexus-containers-component-annotations ASL 2.0 and MIT and xpp plexus-containers-component-javadoc ASL 2.0 and MIT and xpp plexus-containers-component-metadata ASL 2.0 and MIT and xpp plexus-containers-container-default ASL 2.0 and MIT and xpp plexus-containers-javadoc ASL 2.0 and MIT and xpp plexus-i18n ASL 2.0 plexus-i18n-javadoc ASL 2.0 plexus-interactivity MIT plexus-interactivity-api MIT plexus-interactivity-javadoc MIT plexus-interactivity-jline MIT plexus-interpolation ASL 2.0 and ASL 1.1 and MIT plexus-interpolation-javadoc ASL 2.0 and ASL 1.1 and MIT plexus-io ASL 2.0 plexus-io-javadoc ASL 2.0 plexus-languages ASL 2.0 plexus-languages-javadoc ASL 2.0 plexus-pom ASL 2.0 plexus-resources MIT plexus-resources-javadoc MIT plexus-sec-dispatcher ASL 2.0 plexus-sec-dispatcher-javadoc ASL 2.0 plexus-utils ASL 1.1 and ASL 2.0 and xpp and BSD and Public Domain plexus-utils-javadoc ASL 1.1 and ASL 2.0 and xpp and BSD and Public Domain plexus-velocity ASL 2.0 plexus-velocity-javadoc ASL 2.0 plotutils GPLv2+ and GPLv3+ plotutils-devel GPLv2+ and GPLv3+ pmix-devel BSD po4a GPL+ poppler-cpp (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-cpp-devel (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-data-devel BSD and GPLv2 poppler-devel (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-glib-devel (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-glib-doc (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-qt5-devel (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT powermock-api-easymock ASL 2.0 powermock-api-mockito ASL 2.0 and MIT powermock-api-support ASL 2.0 powermock-common ASL 2.0 powermock-core ASL 2.0 powermock-javadoc ASL 2.0 powermock-junit4 ASL 2.0 powermock-reflect ASL 2.0 powermock-testng ASL 2.0 ppp BSD and LGPLv2+ and GPLv2+ and Public Domain ppp-devel BSD and LGPLv2+ and GPLv2+ and Public Domain pps-tools-devel GPLv2+ procps-ng-devel GPL+ and GPLv2 and GPLv2+ and GPLv3+ and LGPLv2+ protobuf-devel BSD protobuf-lite-devel BSD pstoedit GPLv2+ ptscotch-mpich CeCILL-C ptscotch-mpich-devel CeCILL-C 
ptscotch-mpich-devel-parmetis CeCILL-C ptscotch-openmpi CeCILL-C ptscotch-openmpi-devel CeCILL-C py3c-devel MIT py3c-doc CC-BY-SA pygobject3-devel LGPLv2+ and MIT python-cups-doc GPLv2+ python-ldb-devel-common LGPL-3.0-or-later python-sphinx-latex BSD and Public Domain and Python and (MIT or GPLv2) python-sphinx-locale BSD python2-iso8601 MIT python3-babeltrace MIT and GPLv2 python3-cairo-devel MPLv1.1 or LGPLv2 python3-Cython ASL 2.0 python3-greenlet MIT python3-greenlet-devel MIT python3-hivex LGPLv2 python3-httplib2 MIT python3-hypothesis MPLv2.0 python3-imagesize MIT python3-iso8601 MIT python3-javapackages BSD python3-ldb-devel LGPL-3.0-or-later python3-ldns BSD python3-lesscpy MIT python3-libnbd LGPLv2+ python3-libpfm MIT python3-libvirt LGPLv2+ python3-mock BSD python3-mpich MIT python3-openmpi BSD and MIT and Romio python3-packaging BSD or ASL 2.0 python3-pillow MIT python3-pillow-devel MIT python3-pillow-doc MIT python3-pillow-tk MIT python3-pyxattr LGPLv2+ python3-qt5-devel GPLv3 python3-rrdtool GPLv2+ with exceptions python3-samba-devel GPL-3.0-or-later AND LGPL-3.0-or-later python3-scons MIT python3-setuptools_scm MIT python3-sip-devel GPLv2 or GPLv3 and (GPLv3+ with exceptions) python3-snowballstemmer BSD python3-sphinx BSD and Public Domain and Python and (MIT or GPLv2) python3-sphinx-theme-alabaster BSD python3-sphinx_rtd_theme MIT python3-sphinxcontrib-websupport BSD python3-sure GPLv3+ python3-talloc-devel LGPL-3.0-or-later python3-unittest2 BSD python3-whoosh BSD python3.11 Python python3.11-attrs MIT python3.11-Cython ASL 2.0 python3.11-debug Python python3.11-idle Python python3.11-iniconfig MIT python3.11-packaging BSD or ASL 2.0 python3.11-pluggy MIT python3.11-psycopg2-debug LGPLv3+ with exceptions python3.11-psycopg2-tests LGPLv3+ with exceptions python3.11-pybind11 BSD python3.11-pybind11-devel BSD python3.11-pyparsing MIT python3.11-pytest MIT python3.11-semantic_version BSD python3.11-setuptools-rust MIT python3.11-test Python python3.11-tkinter Python python3.11-wheel-wheel MIT and (ASL 2.0 or BSD) python3.12 Python python3.12-Cython ASL 2.0 python3.12-debug Python python3.12-flit-core BSD python3.12-idle Python python3.12-iniconfig MIT python3.12-packaging BSD or ASL 2.0 python3.12-pluggy MIT python3.12-psycopg2-debug LGPLv3+ with exceptions python3.12-psycopg2-tests LGPLv3+ with exceptions python3.12-pybind11 BSD python3.12-pybind11-devel BSD python3.12-pytest MIT python3.12-scipy-tests BSD and MIT and Boost and Qhull and Public Domain python3.12-semantic_version BSD python3.12-setuptools-rust MIT python3.12-setuptools-wheel MIT and ASL 2.0 and (BSD or ASL 2.0) and Python python3.12-test Python python3.12-tkinter Python python3.12-wheel-wheel MIT and (ASL 2.0 or BSD) python38-atomicwrites MIT python38-attrs MIT python38-more-itertools MIT python38-packaging BSD or ASL 2.0 python38-pluggy MIT python38-py MIT and Public Domain python38-pyparsing MIT python38-pytest MIT python38-wcwidth MIT python39-attrs MIT python39-Cython ASL 2.0 python39-debug Python python39-iniconfig MIT python39-more-itertools MIT python39-packaging BSD or ASL 2.0 python39-pluggy MIT python39-py MIT and Public Domain python39-pybind11 BSD python39-pybind11-devel BSD python39-pyparsing MIT python39-pytest MIT python39-wcwidth MIT qatlib-devel BSD and (BSD or GPLv2) qatlib-tests BSD and (BSD or GPLv2) qatzip-devel BSD-3-Clause qdox ASL 2.0 qdox-javadoc ASL 2.0 qemu-kvm-tests GPLv2 and GPLv2+ and CC-BY qgpgme-devel LGPLv2+ and GPLv3+ qhull-devel Qhull qrencode-devel LGPLv2+ qt5-devel GPLv3 
qt5-qtbase-static LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtdeclarative-static LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtquickcontrols2-devel GPLv2+ or LGPLv3 and GFDL qt5-qtserialbus-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qttools-static LGPLv3 or LGPLv2 qt5-qtwayland-devel LGPLv3 quota-devel GPLv2 raptor2-devel GPLv2+ or LGPLv2+ or ASL 2.0 rasqal-devel LGPLv2+ or ASL 2.0 re2c Public Domain recode-devel GPLv2+ redland-devel LGPLv2+ or ASL 2.0 regexp ASL 2.0 regexp-javadoc ASL 2.0 rpcgen BSD and LGPLv2+ rpcsvc-proto-devel BSD and LGPLv2+ rrdtool-devel GPLv2+ with exceptions rrdtool-doc GPLv2+ with exceptions rrdtool-lua GPLv2+ with exceptions rrdtool-ruby GPLv2+ with exceptions rrdtool-tcl GPLv2+ with exceptions ruby-hivex LGPLv2 rubygem-diff-lcs GPLv2+ or Artistic or MIT rubygem-rspec MIT rubygem-rspec-core MIT rubygem-rspec-expectations MIT rubygem-rspec-mocks MIT rubygem-rspec-support MIT s390utils-devel MIT samba-devel GPL-3.0-or-later AND LGPL-3.0-or-later sanlock-devel GPLv2 and GPLv2+ and LGPLv2+ sblim-cmpi-devel EPL sblim-gather-provider EPL sblim-sfcc-devel EPL-1.0 scotch CeCILL-C scotch-devel CeCILL-C SDL2 zlib and MIT SDL2-devel zlib and MIT SDL2-static zlib and MIT sendmail-milter-devel Sendmail sg3_utils-devel GPLv2+ and BSD sgabios ASL 2.0 shadow-utils-subid-devel BSD and GPLv2+ sharutils GPLv3+ and (GPLv3+ and BSD) and (LGPLv3+ or BSD) and LGPLv2+ and Public Domain and GFDL shim-unsigned-aarch64 BSD shim-unsigned-x64 BSD sip GPLv2 or GPLv3 and (GPLv3+ with exceptions) sisu-inject EPL-1.0 and BSD sisu-javadoc EPL-1.0 and BSD sisu-mojos EPL-1.0 sisu-mojos-javadoc EPL-1.0 sisu-plexus EPL-1.0 and BSD slf4j MIT and ASL 2.0 slf4j-ext MIT and ASL 2.0 slf4j-javadoc MIT and ASL 2.0 slf4j-jcl MIT and ASL 2.0 slf4j-jdk14 MIT and ASL 2.0 slf4j-log4j12 MIT and ASL 2.0 slf4j-manual MIT and ASL 2.0 slf4j-sources MIT and ASL 2.0 snappy-devel BSD socket_wrapper BSD sombok GPLv2+ or Artistic clarified sombok-devel GPLv2+ or Artistic clarified sonatype-oss-parent ASL 2.0 sonatype-plugins-parent ASL 2.0 soundtouch-devel LGPLv2+ sparsehash-devel BSD spec-version-maven-plugin CDDL or GPLv2 with exceptions spec-version-maven-plugin-javadoc CDDL or GPLv2 with exceptions speech-dispatcher-devel GPLv2+ speech-dispatcher-doc GPLv2+ speex-devel BSD speexdsp-devel BSD spice-parent ASL 2.0 spice-server-devel LGPLv2+ spirv-tools-devel ASL 2.0 spirv-tools-libs ASL 2.0 subversion ASL 2.0 subversion-devel ASL 2.0 subversion-gnome ASL 2.0 subversion-libs ASL 2.0 subversion-perl ASL 2.0 subversion-ruby ASL 2.0 subversion-tools ASL 2.0 suitesparse-devel (LGPLv2+ or BSD) and LGPLv2+ and GPLv2+ SuperLU BSD and GPLV2+ SuperLU-devel BSD and GPLV2+ taglib-devel LGPLv2 or MPLv1.1 tesseract-devel ASL 2.0 testng ASL 2.0 testng-javadoc ASL 2.0 texi2html GPLv2+ and OFSFDL and (CC-BY-SA or GPLv2) texinfo GPLv3+ texinfo-tex GPLv3+ texlive-lib-devel Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia tinycdb-devel Public Domain tinyxml2 zlib tinyxml2-devel zlib tix-devel TCL tog-pegasus-devel MIT tokyocabinet-devel LGPLv2+ torque OpenPBS and TORQUEv1.1 torque-devel OpenPBS and TORQUEv1.1 tpm-tools-devel CPL tpm2-abrmd-devel BSD tracker-devel GPLv2+ transfig MIT trousers-devel BSD tss2-devel BSD turbojpeg-devel IJG twolame-devel LGPLv2+ uglify-js BSD uid_wrapper GPLv3+ unicode-ucd-unihan MIT unifdef BSD upower-devel GPLv2+ upower-devel-docs GPLv2+ urw-base35-fonts-devel AGPLv3 userspace-rcu-devel LGPLv2+ ustr MIT or LGPLv2+ or BSD 
utf8proc Unicode and MIT utf8proc-devel Unicode and MIT uthash-devel BSD uuid-devel MIT vala LGPLv2+ and BSD vala-devel LGPLv2+ and BSD velocity ASL 2.0 velocity-demo ASL 2.0 velocity-javadoc ASL 2.0 velocity-manual ASL 2.0 vte291-devel LGPLv2+ wavpack-devel BSD web-assets-devel MIT web-assets-filesystem Public Domain webrtc-audio-processing-devel BSD and MIT weld-parent ASL 2.0 wireshark-devel GPL+ woff2-devel MIT wpebackend-fdo-devel BSD xalan-j2 ASL 2.0 and W3C xalan-j2-demo ASL 2.0 xalan-j2-javadoc ASL 2.0 xalan-j2-manual ASL 2.0 xalan-j2-xsltc ASL 2.0 xapian-core GPLv2+ xapian-core-devel GPLv2+ Xaw3d-devel MIT and GPLv3+ xbean ASL 2.0 xbean-javadoc ASL 2.0 xcb-proto MIT xcb-util-devel MIT xcb-util-image-devel MIT xcb-util-keysyms-devel MIT xcb-util-renderutil-devel MIT xcb-util-wm-devel MIT xerces-j2 ASL 2.0 and W3C xerces-j2-demo ASL 2.0 and W3C xerces-j2-javadoc ASL 2.0 and W3C xhtml1-dtds W3C xml-commons-apis ASL 2.0 and W3C and Public Domain xml-commons-apis-javadoc ASL 2.0 and W3C and Public Domain xml-commons-apis-manual ASL 2.0 and W3C and Public Domain xml-commons-resolver ASL 2.0 xml-commons-resolver-javadoc ASL 2.0 xmlrpc-c-c++ BSD and MIT xmlrpc-c-client++ BSD and MIT xmlrpc-c-devel BSD and MIT xmlsec1-devel MIT xmlsec1-gcrypt MIT xmlsec1-gnutls MIT xmlsec1-gnutls-devel MIT xmlsec1-openssl-devel MIT xmltoman GPLv2+ xmlunit BSD xmlunit-javadoc BSD xmvn ASL 2.0 xmvn-api ASL 2.0 xmvn-bisect ASL 2.0 xmvn-connector-aether ASL 2.0 xmvn-connector-ivy ASL 2.0 xmvn-core ASL 2.0 xmvn-install ASL 2.0 xmvn-javadoc ASL 2.0 xmvn-minimal ASL 2.0 xmvn-mojo ASL 2.0 xmvn-parent-pom ASL 2.0 xmvn-resolve ASL 2.0 xmvn-subst ASL 2.0 xmvn-tools-pom ASL 2.0 xorg-x11-apps MIT xorg-x11-drv-libinput-devel MIT xorg-x11-drv-wacom-devel GPLv2+ xorg-x11-server-devel MIT xorg-x11-server-source MIT xorg-x11-util-macros MIT xorg-x11-xkb-utils-devel MIT xorg-x11-xtrans-devel MIT xxhash-devel BSD-2-Clause xxhash-doc BSD-2-Clause xz-java Public Domain xz-java-javadoc Public Domain xz-lzma-compat Public Domain yajl-devel ISC yara-devel BSD-3-Clause yasm BSD and (GPLv2+ or Artistic or LGPLv2+) and LGPLv2 yelp-devel LGPLv2+ and ASL 2.0 and GPLv2+ zlib-static zlib and Boost zziplib-devel LGPLv2+ or MPLv1.1

3.1. Modules in the CodeReady Linux Builder repository

The following table lists packages in the CodeReady Linux Builder repository by module and stream. Note that not all packages in this repository are distributed within a module. For all packages in the CodeReady Linux Builder repository, see Chapter 3, The CodeReady Linux Builder repository.
Module Stream Packages javapackages-tools 201801 ant, ant-antlr, ant-apache-bcel, ant-apache-bsf, ant-apache-log4j, ant-apache-oro, ant-apache-regexp, ant-apache-resolver, ant-apache-xalan2, ant-commons-logging, ant-commons-net, ant-contrib, ant-contrib-javadoc, ant-javadoc, ant-javamail, ant-jdepend, ant-jmf, ant-jsch, ant-junit, ant-lib, ant-manual, ant-swing, ant-testutil, ant-xz, antlr, antlr-C++, antlr-javadoc, antlr-manual, antlr-tool, aopalliance, aopalliance-javadoc, apache-commons-beanutils, apache-commons-beanutils-javadoc, apache-commons-cli, apache-commons-cli-javadoc, apache-commons-codec, apache-commons-codec-javadoc, apache-commons-collections, apache-commons-collections-javadoc, apache-commons-collections-testframework, apache-commons-compress, apache-commons-compress-javadoc, apache-commons-exec, apache-commons-exec-javadoc, apache-commons-io, apache-commons-io-javadoc, apache-commons-jxpath, apache-commons-jxpath-javadoc, apache-commons-lang, apache-commons-lang-javadoc, apache-commons-lang3, apache-commons-lang3-javadoc, apache-commons-logging, apache-commons-logging-javadoc, apache-commons-net, apache-commons-net-javadoc, apache-commons-parent, apache-ivy, apache-ivy-javadoc, apache-parent, apache-resource-bundles, aqute-bnd, aqute-bnd-javadoc, aqute-bndlib, assertj-core, assertj-core-javadoc, atinject, atinject-javadoc, atinject-tck, bcel, bcel-javadoc, beust-jcommander, beust-jcommander-javadoc, bnd-maven-plugin, bsf, bsf-javadoc, bsh, bsh-javadoc, bsh-manual, byaccj, byaccj-debuginfo, byaccj-debugsource, cal10n, cal10n-javadoc, cdi-api, cdi-api-javadoc, cglib, cglib-javadoc, easymock, easymock-javadoc, exec-maven-plugin, exec-maven-plugin-javadoc, felix-osgi-compendium, felix-osgi-compendium-javadoc, felix-osgi-core, felix-osgi-core-javadoc, felix-osgi-foundation, felix-osgi-foundation-javadoc, felix-parent, felix-utils, felix-utils-javadoc, forge-parent, fusesource-pom, geronimo-annotation, geronimo-annotation-javadoc, geronimo-jms, geronimo-jms-javadoc, geronimo-jpa, geronimo-jpa-javadoc, geronimo-parent-poms, glassfish-annotation-api, glassfish-annotation-api-javadoc, glassfish-el, glassfish-el-api, glassfish-el-javadoc, glassfish-jsp-api, glassfish-jsp-api-javadoc, glassfish-legal, glassfish-master-pom, glassfish-servlet-api, glassfish-servlet-api-javadoc, google-guice, google-guice-javadoc, guava20, guava20-javadoc, guava20-testlib, guice-assistedinject, guice-bom, guice-extensions, guice-grapher, guice-jmx, guice-jndi, guice-multibindings, guice-parent, guice-servlet, guice-testlib, guice-throwingproviders, hamcrest, hamcrest-core, hamcrest-demo, hamcrest-javadoc, hawtjni, hawtjni-javadoc, hawtjni-runtime, httpcomponents-client, httpcomponents-client-cache, httpcomponents-client-javadoc, httpcomponents-core, httpcomponents-core-javadoc, httpcomponents-project, isorelax, isorelax-javadoc, ivy-local, jakarta-commons-httpclient, jakarta-commons-httpclient-demo, jakarta-commons-httpclient-javadoc, jakarta-commons-httpclient-manual, jakarta-oro, jakarta-oro-javadoc, jansi, jansi-javadoc, jansi-native, jansi-native-javadoc, java_cup, java_cup-javadoc, java_cup-manual, javacc, javacc-demo, javacc-javadoc, javacc-manual, javacc-maven-plugin, javacc-maven-plugin-javadoc, javamail, javamail-javadoc, javapackages-filesystem, javapackages-local, javapackages-tools, javassist, javassist-javadoc, jaxen, jaxen-demo, jaxen-javadoc, jboss-interceptors-1.2-api, jboss-interceptors-1.2-api-javadoc, jboss-parent, jcl-over-slf4j, jdepend, jdepend-demo, jdepend-javadoc, jdependency, 
jdependency-javadoc, jdom, jdom-demo, jdom-javadoc, jdom2, jdom2-javadoc, jflex, jflex-javadoc, jline, jline-javadoc, jsch, jsch-javadoc, jsoup, jsoup-javadoc, jsr-305, jsr-305-javadoc, jtidy, jtidy-javadoc, jul-to-slf4j, junit, junit-javadoc, junit-manual, jvnet-parent, jzlib, jzlib-demo, jzlib-javadoc, log4j-over-slf4j, log4j12, log4j12-javadoc, maven, maven-antrun-plugin, maven-antrun-plugin-javadoc, maven-archiver, maven-archiver-javadoc, maven-artifact, maven-artifact-manager, maven-artifact-resolver, maven-artifact-resolver-javadoc, maven-artifact-transfer, maven-artifact-transfer-javadoc, maven-assembly-plugin, maven-assembly-plugin-javadoc, maven-cal10n-plugin, maven-clean-plugin, maven-clean-plugin-javadoc, maven-common-artifact-filters, maven-common-artifact-filters-javadoc, maven-compiler-plugin, maven-compiler-plugin-javadoc, maven-dependency-analyzer, maven-dependency-analyzer-javadoc, maven-dependency-plugin, maven-dependency-plugin-javadoc, maven-dependency-tree, maven-dependency-tree-javadoc, maven-doxia, maven-doxia-core, maven-doxia-javadoc, maven-doxia-logging-api, maven-doxia-module-apt, maven-doxia-module-confluence, maven-doxia-module-docbook-simple, maven-doxia-module-fml, maven-doxia-module-latex, maven-doxia-module-rtf, maven-doxia-module-twiki, maven-doxia-module-xdoc, maven-doxia-module-xhtml, maven-doxia-modules, maven-doxia-sink-api, maven-doxia-sitetools, maven-doxia-sitetools-javadoc, maven-doxia-test-docs, maven-doxia-tests, maven-enforcer, maven-enforcer-api, maven-enforcer-javadoc, maven-enforcer-plugin, maven-enforcer-rules, maven-failsafe-plugin, maven-file-management, maven-file-management-javadoc, maven-filtering, maven-filtering-javadoc, maven-hawtjni-plugin, maven-install-plugin, maven-install-plugin-javadoc, maven-invoker, maven-invoker-javadoc, maven-invoker-plugin, maven-invoker-plugin-javadoc, maven-jar-plugin, maven-jar-plugin-javadoc, maven-javadoc, maven-lib, maven-local, maven-model, maven-monitor, maven-parent, maven-plugin-annotations, maven-plugin-build-helper, maven-plugin-build-helper-javadoc, maven-plugin-bundle, maven-plugin-bundle-javadoc, maven-plugin-descriptor, maven-plugin-plugin, maven-plugin-registry, maven-plugin-testing, maven-plugin-testing-harness, maven-plugin-testing-javadoc, maven-plugin-testing-tools, maven-plugin-tools, maven-plugin-tools-annotations, maven-plugin-tools-ant, maven-plugin-tools-api, maven-plugin-tools-beanshell, maven-plugin-tools-generators, maven-plugin-tools-java, maven-plugin-tools-javadoc, maven-plugin-tools-javadocs, maven-plugin-tools-model, maven-plugins-pom, maven-profile, maven-project, maven-remote-resources-plugin, maven-remote-resources-plugin-javadoc, maven-reporting-api, maven-reporting-api-javadoc, maven-reporting-impl, maven-reporting-impl-javadoc, maven-resolver, maven-resolver-api, maven-resolver-connector-basic, maven-resolver-impl, maven-resolver-javadoc, maven-resolver-spi, maven-resolver-test-util, maven-resolver-transport-classpath, maven-resolver-transport-file, maven-resolver-transport-http, maven-resolver-transport-wagon, maven-resolver-util, maven-resources-plugin, maven-resources-plugin-javadoc, maven-script, maven-script-ant, maven-script-beanshell, maven-script-interpreter, maven-script-interpreter-javadoc, maven-settings, maven-shade-plugin, maven-shade-plugin-javadoc, maven-shared, maven-shared-incremental, maven-shared-incremental-javadoc, maven-shared-io, maven-shared-io-javadoc, maven-shared-utils, maven-shared-utils-javadoc, maven-source-plugin, 
maven-source-plugin-javadoc, maven-surefire, maven-surefire-javadoc, maven-surefire-plugin, maven-surefire-provider-junit, maven-surefire-provider-testng, maven-surefire-report-parser, maven-surefire-report-plugin, maven-test-tools, maven-toolchain, maven-verifier, maven-verifier-javadoc, maven-wagon, maven-wagon-file, maven-wagon-ftp, maven-wagon-http, maven-wagon-http-lightweight, maven-wagon-http-shared, maven-wagon-javadoc, maven-wagon-provider-api, maven-wagon-providers, maven2, maven2-javadoc, mockito, mockito-javadoc, modello, modello-javadoc, mojo-parent, munge-maven-plugin, munge-maven-plugin-javadoc, objectweb-asm, objectweb-asm-javadoc, objectweb-pom, objenesis, objenesis-javadoc, os-maven-plugin, os-maven-plugin-javadoc, osgi-annotation, osgi-annotation-javadoc, osgi-compendium, osgi-compendium-javadoc, osgi-core, osgi-core-javadoc, plexus-ant-factory, plexus-ant-factory-javadoc, plexus-archiver, plexus-archiver-javadoc, plexus-bsh-factory, plexus-bsh-factory-javadoc, plexus-build-api, plexus-build-api-javadoc, plexus-cipher, plexus-cipher-javadoc, plexus-classworlds, plexus-classworlds-javadoc, plexus-cli, plexus-cli-javadoc, plexus-compiler, plexus-compiler-extras, plexus-compiler-javadoc, plexus-compiler-pom, plexus-component-api, plexus-component-api-javadoc, plexus-component-factories-pom, plexus-components-pom, plexus-containers, plexus-containers-component-annotations, plexus-containers-component-javadoc, plexus-containers-component-metadata, plexus-containers-container-default, plexus-containers-javadoc, plexus-i18n, plexus-i18n-javadoc, plexus-interactivity, plexus-interactivity-api, plexus-interactivity-javadoc, plexus-interactivity-jline, plexus-interpolation, plexus-interpolation-javadoc, plexus-io, plexus-io-javadoc, plexus-languages, plexus-languages-javadoc, plexus-pom, plexus-resources, plexus-resources-javadoc, plexus-sec-dispatcher, plexus-sec-dispatcher-javadoc, plexus-utils, plexus-utils-javadoc, plexus-velocity, plexus-velocity-javadoc, powermock, powermock-api-easymock, powermock-api-mockito, powermock-api-support, powermock-common, powermock-core, powermock-javadoc, powermock-junit4, powermock-reflect, powermock-testng, python3-javapackages, qdox, qdox-javadoc, regexp, regexp-javadoc, sisu, sisu-inject, sisu-javadoc, sisu-mojos, sisu-mojos-javadoc, sisu-plexus, slf4j, slf4j-ext, slf4j-javadoc, slf4j-jcl, slf4j-jdk14, slf4j-log4j12, slf4j-manual, slf4j-sources, sonatype-oss-parent, sonatype-plugins-parent, spec-version-maven-plugin, spec-version-maven-plugin-javadoc, spice-parent, testng, testng-javadoc, velocity, velocity-demo, velocity-javadoc, velocity-manual, weld-parent, xalan-j2, xalan-j2-demo, xalan-j2-javadoc, xalan-j2-manual, xalan-j2-xsltc, xbean, xbean-javadoc, xerces-j2, xerces-j2-demo, xerces-j2-javadoc, xml-commons-apis, xml-commons-apis-javadoc, xml-commons-apis-manual, xml-commons-resolver, xml-commons-resolver-javadoc, xmlunit, xmlunit-javadoc, xmvn, xmvn-api, xmvn-bisect, xmvn-connector-aether, xmvn-connector-ivy, xmvn-core, xmvn-install, xmvn-javadoc, xmvn-minimal, xmvn-mojo, xmvn-parent-pom, xmvn-resolve, xmvn-subst, xmvn-tools-pom, xz-java, xz-java-javadoc mariadb-devel 10.3 asio, asio-devel, galera, galera-debuginfo, galera-debugsource, Judy, Judy-debuginfo, Judy-debugsource, Judy-devel, mariadb, mariadb-backup, mariadb-backup-debuginfo, mariadb-common, mariadb-debuginfo, mariadb-debugsource, mariadb-devel, mariadb-embedded, mariadb-embedded-debuginfo, mariadb-embedded-devel, mariadb-errmsg, mariadb-gssapi-server, 
mariadb-gssapi-server-debuginfo, mariadb-oqgraph-engine, mariadb-oqgraph-engine-debuginfo, mariadb-server, mariadb-server-debuginfo, mariadb-server-galera, mariadb-server-utils, mariadb-server-utils-debuginfo, mariadb-test, mariadb-test-debuginfo python38-devel 3.8 pytest, python-atomicwrites, python-attrs, python-more-itertools, python-packaging, python-pluggy, python-py, python-wcwidth, python38-atomicwrites, python38-attrs, python38-more-itertools, python38-packaging, python38-pluggy, python38-py, python38-pyparsing, python38-pytest, python38-wcwidth, python3x-pyparsing python39-devel 3.9 Cython, Cython-debugsource, pybind11, pytest, python-attrs, python-iniconfig, python-more-itertools, python-packaging, python-pluggy, python-py, python-wcwidth, python39-attrs, python39-Cython, python39-Cython-debuginfo, python39-debug, python39-iniconfig, python39-more-itertools, python39-packaging, python39-pluggy, python39-py, python39-pybind11, python39-pybind11-devel, python39-pyparsing, python39-pytest, python39-wcwidth, python3x-pyparsing subversion-devel 1.10 libserf, libserf-debuginfo, libserf-debugsource, libserf-devel, mod_dav_svn, mod_dav_svn-debuginfo, subversion, subversion-debuginfo, subversion-debugsource, subversion-devel, subversion-devel-debuginfo, subversion-gnome, subversion-gnome-debuginfo, subversion-libs, subversion-libs-debuginfo, subversion-perl, subversion-perl-debuginfo, subversion-ruby, subversion-ruby-debuginfo, subversion-tools, subversion-tools-debuginfo, utf8proc, utf8proc-debuginfo, utf8proc-debugsource, utf8proc-devel virt-devel rhel hivex, hivex-debuginfo, hivex-debugsource, hivex-devel, libguestfs-winsupport, libiscsi, libiscsi-debuginfo, libiscsi-debugsource, libiscsi-devel, libiscsi-utils, libiscsi-utils-debuginfo, libnbd, libnbd-debuginfo, libnbd-debugsource, libnbd-devel, libvirt, libvirt-client, libvirt-client-debuginfo, libvirt-daemon, libvirt-daemon-config-network, libvirt-daemon-config-nwfilter, libvirt-daemon-debuginfo, libvirt-daemon-driver-interface, libvirt-daemon-driver-interface-debuginfo, libvirt-daemon-driver-network, libvirt-daemon-driver-network-debuginfo, libvirt-daemon-driver-nodedev, libvirt-daemon-driver-nodedev-debuginfo, libvirt-daemon-driver-nwfilter, libvirt-daemon-driver-nwfilter-debuginfo, libvirt-daemon-driver-secret, libvirt-daemon-driver-secret-debuginfo, libvirt-daemon-driver-storage, libvirt-daemon-driver-storage-core, libvirt-daemon-driver-storage-core-debuginfo, libvirt-daemon-driver-storage-disk, libvirt-daemon-driver-storage-disk-debuginfo, libvirt-daemon-driver-storage-iscsi, libvirt-daemon-driver-storage-iscsi-debuginfo, libvirt-daemon-driver-storage-iscsi-direct, libvirt-daemon-driver-storage-iscsi-direct-debuginfo, libvirt-daemon-driver-storage-logical, libvirt-daemon-driver-storage-logical-debuginfo, libvirt-daemon-driver-storage-mpath, libvirt-daemon-driver-storage-mpath-debuginfo, libvirt-daemon-driver-storage-scsi, libvirt-daemon-driver-storage-scsi-debuginfo, libvirt-dbus, libvirt-dbus-debuginfo, libvirt-dbus-debugsource, libvirt-debuginfo, libvirt-debugsource, libvirt-devel, libvirt-docs, libvirt-libs, libvirt-libs-debuginfo, libvirt-nss, libvirt-nss-debuginfo, libvirt-python-debugsource, libvirt-wireshark, libvirt-wireshark-debuginfo, nbdfuse, nbdfuse-debuginfo, netcf, netcf-debuginfo, netcf-debugsource, netcf-devel, netcf-libs, netcf-libs-debuginfo, ocaml-hivex, ocaml-hivex-debuginfo, ocaml-hivex-devel, ocaml-libguestfs, ocaml-libguestfs-debuginfo, ocaml-libguestfs-devel, ocaml-libnbd, ocaml-libnbd-debuginfo, 
ocaml-libnbd-devel, perl-hivex, perl-hivex-debuginfo, perl-Sys-Virt, perl-Sys-Virt-debuginfo, perl-Sys-Virt-debugsource, python3-hivex, python3-hivex-debuginfo, python3-libnbd, python3-libnbd-debuginfo, python3-libvirt, python3-libvirt-debuginfo, qemu-kvm-tests, ruby-hivex, ruby-hivex-debuginfo, seabios, sgabios, SLOF, virt-v2v | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/package_manifest/codereadylinuxbuilder-repository |
32.9.2. Making the Kickstart File Available on the Network | 32.9.2. Making the Kickstart File Available on the Network Network installations using kickstart are quite common, because system administrators can quickly and easily automate the installation on many networked computers. In general, the approach most commonly used is for the administrator to have both a BOOTP/DHCP server and an NFS server on the local network. The BOOTP/DHCP server is used to give the client system its networking information, while the actual files used during the installation are served by the NFS server. Often, these two servers run on the same physical machine, but they are not required to. Include the ks kernel boot option in the append line of a target in your pxelinux.cfg/default file to specify the location of a kickstart file on your network. The syntax of the ks option in a pxelinux.cfg/default file is identical to its syntax when used at the boot prompt. Refer to Section 32.11, "Starting a Kickstart Installation" for a description of the syntax and refer to Example 32.1, "Using the ks option in the pxelinux.cfg/default file" for an example of an append line. If the dhcpd.conf file on the DHCP server is configured to point to /var/lib/tftpboot/pxelinux.0 on the BOOTP server (whether on the same physical machine or not), systems configured to boot over the network can load the kickstart file and commence installation. Example 32.1. Using the ks option in the pxelinux.cfg/default file For example, if foo.ks is a kickstart file available on an NFS share at 192.168.0.200:/export/kickstart/ , part of your pxelinux.cfg/default file might include: | [
"label 1 kernel RHEL6/vmlinuz append initrd=RHEL6/initrd.img ramdisk_size=10000 ks=nfs:192.168.0.200:/export/kickstart/foo.ks"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-kickstart2-networkbased |
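To round out the kickstart example above, the following is a minimal sketch of the DHCP side of such a PXE setup. The subnet, addresses, and TFTP path are illustrative assumptions rather than values from this guide; next-server and filename are the standard ISC DHCP directives for pointing PXE clients at the boot loader served from /var/lib/tftpboot.

# Illustrative fragment for /etc/dhcp/dhcpd.conf on the BOOTP/DHCP server (all values are examples)
cat <<'EOF' >> /etc/dhcp/dhcpd.conf
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.50 192.168.0.150;
    next-server 192.168.0.10;      # TFTP server exporting /var/lib/tftpboot
    filename "pxelinux.0";         # boot loader referenced in pxelinux.cfg/default
}
EOF
service dhcpd restart

With this in place, a client that boots over the network receives its address from this server, loads pxelinux.0 , and then reads the append line containing the ks= option shown in Example 32.1.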
Chapter 11. Managing container images | Chapter 11. Managing container images With Satellite, you can import container images from various sources and distribute them to external containers by using content views. For information about containers for Red Hat Enterprise Linux Atomic Host 7, see Getting Started with Containers in Red Hat Enterprise Linux Atomic Host 7 . For information about containers for Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux 8 Building, running, and managing containers . For information about containers for Red Hat Enterprise Linux 9, see Red Hat Enterprise Linux 9 Building, running, and managing containers . 11.1. Importing container images You can import container image repositories from Red Hat Registry or from other image registries. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure with repository discovery In the Satellite web UI, navigate to Content > Products and click Repo Discovery . From the Repository Type list, select Container Images . In the Registry to Discover field, enter the URL of the registry to import images from. In the Registry Username field, enter the name that corresponds with your user name for the container image registry. In the Registry Password field, enter the password that corresponds with the user name that you enter. In the Registry Search Parameter field, enter any search criteria that you want to use to filter your search, and then click Discover . Optional: To further refine the Discovered Repository list, in the Filter field, enter any additional search criteria that you want to use. From the Discovered Repository list, select any repositories that you want to import, and then click Create Selected . Optional: To change the download policy for this container repository to on demand , see Section 4.11, "Changing the download policy for a repository" . Optional: If you want to create a product, from the Product list, select New Product . In the Name field, enter a product name. Optional: In the Repository Name and Repository Label columns, you can edit the repository names and labels. Click Run Repository Creation . When repository creation is complete, you can click each new repository to view more information. Optional: To filter the content you import to a repository, click a repository, and then navigate to Limit Sync Tags . Click to edit, and add any tags that you want to limit the content that synchronizes to Satellite. In the Satellite web UI, navigate to Content > Products and select the name of your product. Select the new repositories and then click Sync Now to start the synchronization process. Procedure with creating a repository manually In the Satellite web UI, navigate to Content > Products . Click the name of the required product. Click New repository . From the Type list, select docker . Enter the details for the repository, and click Save . Select the new repository, and click Sync Now . steps To view the progress of the synchronization, navigate to Content > Sync Status and expand the repository tree. When the synchronization completes, you can click Container Image Manifests to list the available manifests. From the list, you can also remove any manifests that you do not require. CLI procedure Create the custom Red Hat Container Catalog product: Create the repository for the container images: Synchronize the repository: Additional resources For more information about creating a product and repository manually, see Chapter 4, Importing content . 11.2. 
Managing container name patterns When you use Satellite to create and manage your containers, as the container moves through content view versions and different stages of the Satellite lifecycle environment, the container name changes at each stage. For example, if you synchronize a container image with the name ssh from an upstream repository, when you add it to a Satellite product and organization and then publish as part of a content view, the container image can have the following name: my_organization_production-custom_spin-my_product-custom_ssh . This can create problems when you want to pull a container image because container registries can contain only one instance of a container name. To avoid problems with Satellite naming conventions, you can set a registry name pattern to override the default name to ensure that your container name is clear for future use. Limitations If you use a registry name pattern to manage container naming conventions, because registry naming patterns must generate globally unique names, you might experience naming conflict problems. For example: If you set the repository.docker_upstream_name registry name pattern, you cannot publish or promote content views with container content with identical repository names to the Production lifecycle. If you set the lifecycle_environment.name registry name pattern, this can prevent the creation of a second container repository with the identical name. You must proceed with caution when defining registry naming patterns for your containers. Procedure To manage container naming with a registry name pattern, complete the following steps: In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Create a lifecycle environment or select an existing lifecycle environment to edit. In the Container Image Registry area, click the edit icon to the right of the Registry Name Pattern area. Use the list of variables and examples to determine which registry name pattern you require. In the Registry Name Pattern field, enter the registry name pattern that you want to use. For example, to use the repository.docker_upstream_name : Click Save . 11.3. Managing container registry authentication You can manage the authentication settings for accessing container images from Satellite. By default, users must authenticate to access container images in Satellite. You can specify whether you want users to authenticate to access container images in Satellite in a lifecycle environment. For example, you might want to permit users to access container images from the Production lifecycle without any authentication requirement and restrict access to the Development and QA environments to authenticated users. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Select the lifecycle environment that you want to manage authentication for. To permit unauthenticated access to the containers in this lifecycle environment, select the Unauthenticated Pull checkbox. To restrict unauthenticated access, clear the Unauthenticated Pull checkbox. Click Save . 11.4. Configuring Podman and Docker to trust the certificate authority Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/ and /etc/docker/certs.d/ . Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and name the file ca.crt . In the following examples, replace hostname.example.com with satellite.example.com or capsule.example.com , depending on your use case. 
You might first need to create the relevant location using: or For podman, use: Alternatively, if you are using Docker, copy the root CA file to the equivalent Docker directory: You no longer need to use the --tls-verify=false option when logging in to the registry: 11.5. Using container registries You can use Podman and Docker to fetch content from container registries and push the content to the Satellite container registry. The Satellite registry follows the Open Containers Initiative (OCI) specification, so you can push content to Satellite by using the same methods that apply to other registries. For more information about OCI, see Open Container Initiative Distribution Specification . Prerequisites To push content to Satellite, ensure your Satellite account has the edit_products permission. Ensure that a product exists before pushing a repository. For more information, see Section 4.4, "Creating a custom product" . To pull content from Satellite, ensure that your Satellite account has the view_lifecycle_environments , view_products , and view_content_views permissions, unless the lifecycle environment allows unauthenticated pull. Container registries on Capsules On Capsules with content, the Container Gateway Capsule plugin acts as the container registry. It caches authentication information from Katello and proxies incoming requests to Pulp. The Container Gateway is available by default on Capsules with content. Considerations for pushing content to the Satellite container registry You can only push content to the Satellite Server itself. If you need pushed content on Capsule Servers as well, use Capsule syncing. The pushed container registry name must contain only lowercase characters. Unless pushed repositories are published in a content view version, they do not follow the registry name pattern. For more information, see Section 11.2, "Managing container name patterns" . This is to ensure that users can push and pull from the same path. Users are required to push and pull from the same path. If you use the label-based schema, pull using labels. If you use the ID-based schema, pull using IDs. Procedure Logging in to the container registry: Listing container images: Pulling container images: Pushing container images to the Satellite container registry: To indicate which organization, product, and repository the container image belongs to, include the organization and product in the container registry name. You can address the container destination by using one of the following schemas: After the content push has completed, a repository is created in Satellite. | [
"hammer product create --description \" My_Description \" --name \"Red Hat Container Catalog\" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"",
"hammer repository create --content-type \"docker\" --docker-upstream-name \"rhel7\" --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\" --url \"http://registry.access.redhat.com/\"",
"hammer repository synchronize --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\"",
"<%= repository.docker_upstream_name %>",
"mkdir -p /etc/containers/certs.d/hostname.example.com",
"mkdir -p /etc/docker/certs.d/hostname.example.com",
"cp rootCA.pem /etc/containers/certs.d/hostname.example.com/ca.crt",
"cp rootCA.pem /etc/docker/certs.d/hostname.example.com/ca.crt",
"podman login hostname.example.com Username: admin Password: Login Succeeded!",
"podman login satellite.example.com",
"podman search satellite.example.com/",
"podman pull satellite.example.com/my-image:<optional_tag>",
"podman push My_Container_Image_Hash satellite.example.com / My_Organization_Label / My_Product_Label / My_Repository_Name [:_My_Tag_] podman push My_Container_Image_Hash satellite.example.com /id/ My_Organization_ID / My_Product_ID / My_Repository_Name [:_My_Tag_]"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/Managing_Container_Images_content-management |
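As a worked example of the push workflow described above, the following sketch tags a locally pulled image with the label-based schema and pushes it to the Satellite container registry. The organization label my_org , product label my_product , and repository name ubi are placeholder assumptions; remember that the pushed registry name must contain only lowercase characters.

podman login satellite.example.com --username admin
podman pull registry.access.redhat.com/ubi9/ubi:latest
podman tag registry.access.redhat.com/ubi9/ubi:latest satellite.example.com/my_org/my_product/ubi:9
podman push satellite.example.com/my_org/my_product/ubi:9
# Pull from the same label-based path to confirm the repository was created
podman pull satellite.example.com/my_org/my_product/ubi:9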
Chapter 4. Network Observability Operator in OpenShift Container Platform | Chapter 4. Network Observability Operator in OpenShift Container Platform Network Observability is an OpenShift operator that deploys a monitoring pipeline to collect and enrich network traffic flows that are produced by the Network Observability eBPF agent. 4.1. Viewing statuses The Network Observability Operator provides the Flow Collector API. When a Flow Collector resource is created, it deploys pods and services to create and store network flows in the Loki log store, as well as to display dashboards, metrics, and flows in the OpenShift Container Platform web console. Procedure Run the following command to view the state of FlowCollector : USD oc get flowcollector/cluster Example output Check the status of pods running in the netobserv namespace by entering the following command: USD oc get pods -n netobserv Example output flowlogs-pipeline pods collect flows, enriches the collected flows, then send flows to the Loki storage. netobserv-plugin pods create a visualization plugin for the OpenShift Container Platform Console. Check the status of pods running in the namespace netobserv-privileged by entering the following command: USD oc get pods -n netobserv-privileged Example output netobserv-ebpf-agent pods monitor network interfaces of the nodes to get flows and send them to flowlogs-pipeline pods. If you are using the Loki Operator, check the status of pods running in the openshift-operators-redhat namespace by entering the following command: USD oc get pods -n openshift-operators-redhat Example output 4.2. Network Observablity Operator architecture The Network Observability Operator provides the FlowCollector API, which is instantiated at installation and configured to reconcile the eBPF agent , the flowlogs-pipeline , and the netobserv-plugin components. Only a single FlowCollector per cluster is supported. The eBPF agent runs on each cluster node with some privileges to collect network flows. The flowlogs-pipeline receives the network flows data and enriches the data with Kubernetes identifiers. If you choose to use Loki, the flowlogs-pipeline sends flow logs data to Loki for storing and indexing. The netobserv-plugin , which is a dynamic OpenShift Container Platform web console plugin, queries Loki to fetch network flows data. Cluster-admins can view the data in the web console. If you do not use Loki, you can generate metrics with Prometheus. Those metrics and their related dashboards are accessible in the web console. For more information, see "Network Observability without Loki". If you are using the Kafka option, the eBPF agent sends the network flow data to Kafka, and the flowlogs-pipeline reads from the Kafka topic before sending to Loki, as shown in the following diagram. Additional resources Network Observability without Loki 4.3. Viewing Network Observability Operator status and configuration You can inspect the status and view the details of the FlowCollector using the oc describe command. Procedure Run the following command to view the status and configuration of the Network Observability Operator: USD oc describe flowcollector/cluster | [
"oc get flowcollector/cluster",
"NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready",
"oc get pods -n netobserv",
"NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m",
"oc get pods -n netobserv-privileged",
"NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h",
"oc describe flowcollector/cluster"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/nw-network-observability-operator |
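For quick health checks beyond the outputs above, a short script such as the following can be used. The namespace names match those shown in this chapter; the jsonpath query assumes that the FlowCollector custom resource reports standard status conditions, which should be verified against your installed CRD.

# Flag any pod that is not Running in the Network Observability namespaces
for ns in netobserv netobserv-privileged; do
  echo "== ${ns} =="
  oc get pods -n "${ns}" --no-headers | awk '$3 != "Running" {print "not running:", $1}'
done

# Print the reported condition types and statuses of the FlowCollector, if present
oc get flowcollector cluster -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'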
Chapter 6. Deploy the edge without storage | Chapter 6. Deploy the edge without storage You can deploy a distributed compute node (DCN) cluster without block storage at edge sites if you use the Object Storage service (swift) as a back end for the Image service (glance) at the central location. If you deploy a site without block storage, you cannot update it later to have block storage. Use the compute role when deploying the edge site without storage. Important The following procedure uses lvm as the back end for the Block Storage service (cinder), which is not supported for production. You must deploy a certified block storage solution as a back end for the Block Storage service. 6.1. Architecture of a DCN edge site without storage To deploy this architecture, use the Compute role. Without block storage at the edge The Object Storage (swift) service at the control plane is used as an Image (glance) service backend. Multi-backend image service is not available. Images are cached locally at edge sites in Nova. For more information, see Chapter 10, Precaching glance images into nova . The instances are stored locally on the Compute nodes. Volume services such as Block Storage (cinder) are not available at edge sites. Important If you do not deploy the central location with Red Hat Ceph storage, you will not have the option of deploying an edge site with storage at a later time. For more information about deploying without block storage at the edge, see Section 6.2, "Deploying edge nodes without storage" . 6.2. Deploying edge nodes without storage When you deploy Compute nodes at an edge site, you use the central location as the control plane. You can add a new DCN stack to your deployment and reuse the configuration files from the central location to create new environment files. Prerequisites You must create the network_data.yaml file specific to your environment. You can find sample files in /usr/share/openstack-tripleo-heat-templates/network-data-samples . You must create an overcloud-baremetal-deploy.yaml file specific to your environment. For more information, see Provisioning bare metal nodes for the overcloud . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate an environment file ~/dcn0/dcn0-images-env.yaml: Generate a roles file for the edge location. Generate roles for the edge location using roles appropriate for your environment: If you are using ML2/OVS for the networking overlay, you must edit the Compute role to include the NeutronDhcpAgent and NeutronMetadataAgent services: Create a role file for the Compute role: Edit the /home/stack/dcn0/dcn0_roles.yaml file to include the NeutronDhcpAgent and NeutronMetadataAgent services: For more information, see Preparing for a routed provider network . Provision networks for the overcloud. This command takes a definition file for overcloud networks as input. You must use the output file in your command to deploy the overcloud: Provision bare metal instances. This command takes a definition file for bare metal nodes as input. You must use the output file in your command to deploy the overcloud: Configure the naming conventions for your site in the site-name.yaml environment file. Deploy the stack for the dcn0 edge site: 6.3. Excluding specific image types at the edge By default, Compute nodes advertise all image formats that they support. If your Compute nodes do not use Ceph storage, you can exclude RAW images from the image format advertisement. 
The RAW image format consumes more network bandwidth and local storage than QCOW2 images and is inefficient when used at edge sites without Ceph storage. Use the NovaImageTypeExcludeList parameter to exclude specific image formats: Important Do not use this parameter at edge sites with Ceph, because Ceph requires RAW images. Note Compute nodes that do not advertise RAW images cannot host instances created from RAW images. This can affect snapshot-redeploy and shelving. Prerequisites Red Hat OpenStack Platform director is installed The central location is installed Compute nodes are available for a DCN deployment Procedure Log in to the undercloud host as the stack user. Source the stackrc credentials file: Include the NovaImageTypeExcludeList parameter in one of your custom environment files: Include the environment file that contains the NovaImageTypeExcludeList parameter in the overcloud deployment command, along with any other environment files relevant to your deployment: | [
"[stack@director ~]USD source ~/stackrc",
"sudo[e] openstack tripleo container image prepare -e containers.yaml --output-env-file ~/dcn0/dcn0-images-env.yaml",
"(undercloud)USD openstack overcloud roles generate Compute -o /home/stack/dcn0/dcn0_roles.yaml",
"openstack overcloud roles generate Compute -o /home/stack/dcn0/dcn0_roles.yaml",
"- OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe + - OS::TripleO::Services::NeutronDhcpAgent + - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NovaAZConfig - OS::TripleO::Services::NovaCompute",
"(undercloud)USD openstack overcloud network provision --output /home/stack/dcn0/overcloud-networks-deployed.yaml /home/stack/dcn0/network_data.yaml",
"(undercloud)USD openstack overcloud node provision --stack dcn0 --network-config -o /home/stack/dcn0/deployed_metal.yaml ~/overcloud-baremetal-deploy.yaml",
"parameter_defaults: NovaComputeAvailabilityZone: dcn0 ControllerExtraConfig: nova::availability_zone::default_schedule_zone: dcn0 NovaCrossAZAttach: false",
"openstack overcloud deploy --deployed-server --stack dcn0 --templates /usr/share/openstack-tripleo-heat-templates/ -r /home/stack/dcn0/dcn0_roles.yaml -n /home/stack/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/overcloud-deploy/central/central-export.yaml -e /home/stack/dcn0/overcloud-networks-deployed.yaml -e /home/stack/dcn0/overcloud-vip-deployed.yaml -e /home/stack/dcn0/deployed_metal.yaml",
"source ~/stackrc",
"parameter_defaults: NovaImageTypeExcludeList: - raw",
"openstack overcloud deploy --templates -n network_data.yaml -r roles_data.yaml -e <environment_files> -e <new_environment_file>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/assembly_deploy-edge-without-storage |
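After the dcn0 stack deploys, a few hedged verification steps can confirm that the edge Compute nodes registered as expected. The dcn0 availability zone name follows the NovaComputeAvailabilityZone value used in the example environment file; the overcloudrc path and the container config path on the Compute node are assumptions based on default director layouts.

source ~/overcloudrc
# The dcn0 zone should list the new edge Compute hosts
openstack availability zone list --compute
openstack compute service list --service nova-compute
# On a dcn0 Compute node, check that the RAW exclusion reached nova.conf
# (path assumes the default TripleO container-config layout)
sudo grep -r image_type_exclude_list /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/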
Chapter 1. About node remediation, fencing, and maintenance | Chapter 1. About node remediation, fencing, and maintenance Hardware is imperfect and software contains bugs. When node-level failures, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. However, some workloads, such as ReadWriteOnce (RWO) volumes and StatefulSets, might require at-most-one semantics. Failures affecting these workloads risk data loss, corruption, or both. It is important to ensure that the node reaches a safe state, known as fencing before initiating recovery of the workload, known as remediation and ideally, recovery of the node also. It is not always practical to depend on administrator intervention to confirm the true status of the nodes and workloads. To facilitate such intervention, Red Hat OpenShift provides multiple components for the automation of failure detection, fencing and remediation. 1.1. Self Node Remediation The Self Node Remediation Operator is a Red Hat OpenShift add-on Operator that implements an external system of fencing and remediation that reboots unhealthy nodes and deletes resources, such as Pods and VolumeAttachments. The reboot ensures that the workloads are fenced, and the resource deletion accelerates the rescheduling of affected workloads. Unlike other external systems, Self Node Remediation does not require any management interface, like, for example, Intelligent Platform Management Interface (IPMI) or an API for node provisioning. Self Node Remediation can be used by failure detection systems, like Machine Health Check or Node Health Check. 1.2. Fence Agents Remediation The Fence Agents Remediation (FAR) Operator is a Red Hat OpenShift add-on operator that automatically remediates unhealthy nodes, similar to the Self Node Remediation Operator. You can use well-known agents to fence and remediate unhealthy nodes. The remediation includes rebooting the unhealthy node using a fence agent, and then evicting workloads from the unhealthy node, depending on the remediation strategy . 1.3. Machine Deletion Remediation The Machine Deletion Remediation (MDR) Operator is a Red Hat OpenShift add-on Operator that uses the Machine API to reprovision unhealthy nodes. MDR works with NodeHealthCheck (NHC) to create a Custom Resource (CR) for MDR with information about the unhealthy node. MDR follows the annotation on the node to the associated machine object and confirms that it has an owning controller. MDR proceeds to delete the machine, and then the owning controller recreates a replacement machine. 1.4. Machine Health Check Machine Health Check utilizes a Red Hat OpenShift built-in failure detection, fencing and remediation system, which monitors the status of machines and the conditions of nodes. Machine Health Checks can be configured to trigger external fencing and remediation systems, like Self Node Remediation. 1.5. Node Health Check The Node Health Check Operator is a Red Hat OpenShift add-on Operator that implements a failure detection system that monitors node conditions. It does not have a built-in fencing or remediation system and so must be configured with an external system that provides these features. By default, it is configured to utilize the Self Node Remediation system. 1.6. Node Maintenance Administrators face situations where they need to interrupt the cluster, for example, replace a drive, RAM, or a NIC. 
In advance of this maintenance, affected nodes should be cordoned and drained. When a node is cordoned, new workloads cannot be scheduled on that node. When a node is drained, to avoid or minimize downtime, workloads on the affected node are transferred to other nodes. While this maintenance can be achieved using command line tools, the Node Maintenance Operator offers a declarative approach to achieve this by using a custom resource. When such a resource exists for a node, the Operator cordons and drains the node until the resource is deleted. 1.7. Flow of events during fencing and remediation When a node becomes unhealthy, multiple events occur to detect, fence, and remediate the node, in order to restore workloads, and ideally the node, to health. Some events are triggered by the OpenShift cluster, and some events are reactions by the Workload Availability operators. Understanding this flow of events, and the duration between these events, is important to make informed decisions. These decisions include which remediation provider to use, and how to configure the Node Health Check Operator and the chosen remediation provider. Note The example outlined is a common use case that describes the phased flow of events. The phases act as follows only when the Node Health Check Operator is configured with the Ready=Unknown unhealthy condition. 1.7.1. Phase 1 - Kubernetes Health Check (Core OpenShift) The unhealthy node stops communicating with the API server. After approximately 50 seconds the API server sets the "Ready" condition of the node to "Unknown", that is, Ready=Unknown . 1.7.2. Phase 2 - Node Health Check (NHC) If the Ready=Unknown condition is present longer than the configured duration, it starts a remediation. The user-configured duration in this phase represents the tolerance that the Operator has towards the duration of the unhealthy condition. It takes into account that while the workload is restarting as requested, the resource is expected to be "Unready". For example: If you have a workload that takes a long time to restart, then you need to have a longer timeout. Likewise, when the workload restart is short, then the timeout needed is also short. 1.7.3. Phase 3 - Remediate Host / Remediate API (depending on the configured remediation operator) Using Machine Deletion Remediation (MDR), Self Node Remediation (SNR) or Fence Agents Remediation (FAR), the remediator fences and isolates the node by rebooting it in order to reach a safe state. The details of this phase are configured by the user and depend on their workload requirements. For example: Machine Deletion Remediation - The choice of platform influences the time it takes to reprovision the machine, and then the duration of the remediation. MDR is only applicable to clusters that use the Machine API. Self Node Remediation - The remediation time depends on many parameters, including the safe time it takes to automatically reboot unhealthy nodes, and the watchdog devices used to ensure that the machine enters a safe state when an error condition is detected. Fence Agents Remediation - The fencing agent time depends on many parameters, including the cluster nodes, the management interface, and the agent parameters. 1.7.4. Phase 4 - Workload starting When MDR is used, the remediator deletes the resources. When FAR and SNR are used, varying remediation strategies are available for them to use. One strategy is OutOfServiceTaint , which uses the out-of-service taint to permit the deletion of resources in the cluster. 
In both cases, deleting the resources enables faster rescheduling of the affected workload. The workload is then rescheduled and restarted. This phase is initiated automatically by the remediators when fencing is complete. If fencing does not complete, and an escalation remediation is required, the user must configure the timeout in seconds for the entire remediation process. If the timeout passes, and the node is still unhealthy, NHC will try the next remediator in line to remediate the unhealthy node. 1.8. About metrics for workload availability operators The addition of data analysis enhances observability for the workload availability operators. The data provides metrics about the activity of the operators, and the effect on the cluster. These metrics improve decision-making capabilities, enable data-driven optimization, and enhance overall system performance. You can use metrics to do these tasks: Access comprehensive tracking data for operators, to monitor overall system efficiency. Access actionable insights derived from tracking data, such as identifying frequently failing nodes, or downtime due to the operators' remediations. Visualize how the operators' remediations are actually enhancing the system efficiency. 1.8.1. Configuring metrics for workload availability operators You can configure metrics for the workload availability operators by creating a secret and a ServiceMonitor for the Operator's metrics service, as described in the following procedure. Prerequisites You must first configure the monitoring stack. For more information, see Configuring the monitoring stack . You must enable monitoring for user-defined projects. For more information, see Enabling monitoring for user-defined projects . Procedure Create the prometheus-user-token secret from the existing prometheus-user-workload-token secret as follows: existingPrometheusTokenSecret=$(kubectl get secret --namespace openshift-user-workload-monitoring | grep prometheus-user-workload-token | awk '{print $1}') 1 kubectl get secret ${existingPrometheusTokenSecret} --namespace=openshift-user-workload-monitoring -o yaml | \ sed '/namespace: .*==/d;/ca.crt:/d;/serviceCa.crt/d;/creationTimestamp:/d;/resourceVersion:/d;/uid:/d;/annotations/d;/kubernetes.io/d;' | \ sed 's/namespace: .*/namespace: openshift-workload-availability/' | \ 2 sed 's/name: .*/name: prometheus-user-workload-token/' | \ 3 sed 's/type: .*/type: Opaque/' \ > prom-token.yaml kubectl apply -f prom-token.yaml 1 The prometheus-user-token is required by the Metric ServiceMonitor, created in the next step. 2 Ensure the new Secret's namespace is the one where the NHC Operator is installed, for example, openshift-workload-availability . 3 The prometheus-user-workload-token only exists if User Workload Prometheus scraping is enabled. Create the ServiceMonitor as follows: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: node-healthcheck-metrics-monitor namespace: openshift-workload-availability 1 labels: app.kubernetes.io/component: controller-manager spec: endpoints: - interval: 30s port: https scheme: https authorization: type: Bearer credentials: name: prometheus-user-workload-token key: token tlsConfig: ca: configMap: name: nhc-serving-certs-ca-bundle key: service-ca.crt serverName: node-healthcheck-controller-manager-metrics-service.openshift-workload-availability.svc 2 selector: matchLabels: app.kubernetes.io/component: controller-manager app.kubernetes.io/name: node-healthcheck-operator app.kubernetes.io/instance: metrics 1 Specify the namespace where you want to configure the metrics, for example, openshift-workload-availability. 
2 The serverName must contain the same namespace where the Operator is installed. In the example, openshift-workload-availability is placed after the metrics service name and before the filetype extension. Verification To confirm that the configuration is successful the Observe > Targets tab in OCP Web UI shows Endpoint Up . 1.8.2. Example metrics for workload availability operators The following are example metrics from the various workload availability operators. The metrics include information on the following indicators: Operator availability: Showing if and when each Operator is up and running. Node remediation count: Showing the number of remediations across the same node, and across all nodes. Node remediation duration: Showing the remediation downtime or recovery time. Node remediation gauge: Showing the number of ongoing remediations. | [
"existingPrometheusTokenSecret=USD(kubectl get secret --namespace openshift-user-workload-monitoring | grep prometheus-user-workload-token | awk '{print USD1}') 1 get secret USD{existingPrometheusTokenSecret} --namespace=openshift-user-workload-monitoring -o yaml | sed '/namespace: .*==/d;/ca.crt:/d;/serviceCa.crt/d;/creationTimestamp:/d;/resourceVersion:/d;/uid:/d;/annotations/d;/kubernetes.io/d;' | sed 's/namespace: .*/namespace: openshift-workload-availability/' | \\ 2 sed 's/name: .*/name: prometheus-user-workload-token/' | \\ 3 sed 's/type: .*/type: Opaque/' | > prom-token.yaml apply -f prom-token.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: node-healthcheck-metrics-monitor namespace: openshift-workload-availability 1 labels: app.kubernetes.io/component: controller-manager spec: endpoints: - interval: 30s port: https scheme: https authorization: type: Bearer credentials: name: prometheus-user-workload-token key: token tlsConfig: ca: configMap: name: nhc-serving-certs-ca-bundle key: service-ca.crt serverName: node-healthcheck-controller-manager-metrics-service.openshift-workload-availability.svc 2 selector: matchLabels: app.kubernetes.io/component: controller-manager app.kubernetes.io/name: node-healthcheck-operator app.kubernetes.io/instance: metrics"
]
| https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/25.1/html/remediation_fencing_and_maintenance/about-remediation-fencing-maintenance |
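To make the flow described in section 1.7 concrete, the following is an illustrative NodeHealthCheck resource that pairs the Ready unhealthy conditions and tolerance duration with a Self Node Remediation template. The field names follow the medik8s NodeHealthCheck v1alpha1 API, and the template name and namespace are assumptions; verify both against the CRDs and templates installed in your cluster before applying.

cat <<'EOF' | oc apply -f -
apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nhc-worker-example
spec:
  minHealthy: 51%
  selector:
    matchExpressions:
      - key: node-role.kubernetes.io/worker
        operator: Exists
  remediationTemplate:               # assumed template created by the Self Node Remediation Operator
    apiVersion: self-node-remediation.medik8s.io/v1alpha1
    kind: SelfNodeRemediationTemplate
    namespace: openshift-workload-availability
    name: self-node-remediation-resource-deletion-template
  unhealthyConditions:               # tolerance durations discussed in Phase 2 of section 1.7
    - type: Ready
      status: "False"
      duration: 300s
    - type: Ready
      status: Unknown
      duration: 300s
EOF

Once created, the Operator watches the matching nodes and starts the Phase 2 countdown whenever one of the listed conditions persists for longer than its configured duration.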
Chapter 3. Configuring the HA cluster to manage the SAP HANA Scale-Up System Replication setup | Chapter 3. Configuring the HA cluster to manage the SAP HANA Scale-Up System Replication setup Please refer to the following documentation for general guidance on setting up pacemaker-based HA clusters on RHEL: Configuring and managing high availability clusters on RHEL 8 Support Policies for RHEL High Availability Clusters The remainder of this guide will assume that the following things are configured and working properly: The basic HA cluster is configured according to the official Red Hat documentation and has proper and working fencing (please see the support policies for Fencing/STONITH for guidelines on which fencing mechanism to use according to the platform the setup is running on. Note Using fence_scsi/fence_mpath as fencing/STONITH mechanism is not supported for this solution, since there is no shared storage that is accessed by the SAP HANA instances managed by the HA cluster. SAP HANA System Replication has been configured and it has been verified that manual takeovers between the SAP HANA instances are working correctly. Automatic startup on boot of the SAP HANA instances is disabled on all HA cluster nodes (the start and stop of the SAP HANA instances will be managed by the HA cluster). Note If the SAP HANA instances that will be managed by the HA cluster are systemd enabled (SAP HANA 2.0 SPS07 and later), additional configuration changes are required to ensure that systemd does not interfere with the management of the SAP instances by the HA cluster. Please check out section 2. Red Hat HA Solutions for SAP in The Systemd-Based SAP Startup Framework for information. 3.1. Installing resource agents and other components required for managing SAP HANA Scale-Up System Replication using the RHEL HA Add-On The resource agents and other SAP HANA specific components required for setting up an HA cluster for managing SAP HANA Scale-Up System Replication setup are provided via the resource-agents-sap-hana RPM package from the "RHEL for SAP Solutions" repo. To install the package please use the following command: [root]# dnf install resource-agents-sap-hana 3.2. Enabling the SAP HANA srConnectionChanged() hook As documented in SAP's Implementing a HA/DR Provider , recent versions of SAP HANA provide so-called "hooks" that allow SAP HANA to send out notifications for certain events. The srConnectionChanged() hook can be used to improve the ability of the HA cluster to detect when a change in the status of the SAP HANA System Replication occurs that requires the HA cluster to take action and to avoid data loss/data corruption by preventing accidental takeovers from being triggered in situations where this should be avoided. Note When using SAP HANA 2.0 SPS0 or later and a version of the resource-agents-sap-hana package that provides the components for supporting the srConnectionChanged() hook, it is mandatory to enable the hook before proceeding with the HA cluster setup. 3.2.1. Verifying the version of the resource-agents-sap-hana package Please verify that the correct version of the resource-agents-sap-hana package providing the components required to enable the srConnectionChanged() hook for your version of RHEL 8 is installed as documented in the following article: How can the srConnectionChanged() hook be used to improve the detection of situations where a takeover is required, in a Red Hat Pacemaker cluster managing HANA Scale-up or Scale-out System Replication? 3.2.2. 
Activating the srConnectionChanged() hook on all SAP HANA instances Note The steps to activate the srConnectionChanged() hook need to be performed for each SAP HANA instance on all HA cluster nodes. Stop the HA cluster on both nodes (this command only needs to be run on one HA cluster node): [root]# pcs cluster stop --all Verify that all SAP HANA instances are stopped completely. Update the SAP HANA global.ini file on each node to enable use of the hook script by both SAP HANA instances (e.g., in file /hana/shared/RH1/global/hdb/custom/config/global.ini ): [ha_dr_provider_SAPHanaSR] provider = SAPHanaSR path = /usr/share/SAPHanaSR/srHook execution_order = 1 [trace] ha_dr_saphanasr = info On each HA cluster node, create the file /etc/sudoers.d/20-saphana by running the following command and adding the contents below to allow the hook script to update the node attributes when the srConnectionChanged() hook is called. [root]# visudo -f /etc/sudoers.d/20-saphana Cmnd_Alias DC1_SOK = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC1 -v SOK -t crm_config -s SAPHanaSR Cmnd_Alias DC1_SFAIL = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC1 -v SFAIL -t crm_config -s SAPHanaSR Cmnd_Alias DC2_SOK = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SOK -t crm_config -s SAPHanaSR Cmnd_Alias DC2_SFAIL = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SFAIL -t crm_config -s SAPHanaSR rh1adm ALL=(ALL) NOPASSWD: DC1_SOK, DC1_SFAIL, DC2_SOK, DC2_SFAIL Defaults!DC1_SOK, DC1_SFAIL, DC2_SOK, DC2_SFAIL !requiretty Replace rh1 with the lowercase SID of your SAP HANA installation and replace DC1 and DC2 with your SAP HANA site names. For further information on why the Defaults setting is needed, refer to The srHook attribute is set to SFAIL in a Pacemaker cluster managing SAP HANA system replication, even though replication is in a healthy state . Start the SAP HANA instances on both HA cluster nodes manually without starting the HA cluster: [rh1adm]USD HDB start Verify that the hook script is working as expected. Perform some action to trigger the hook, such as stopping a SAP HANA instance. Then check whether the hook logged anything using a method such as the one below: [rh1adm]USD cdtrace [rh1adm]USD awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf "%s %s %s %s\n",USD2,USD3,USD5,USD16 }' nameserver_* 2018-05-04 12:34:04.476445 ha_dr_SAPHanaSR SFAIL 2018-05-04 12:53:06.316973 ha_dr_SAPHanaSR SOK [rh1adm]# grep ha_dr_ * Note For more information on how to verify that the SAP HANA hook is working correctly, please check the SAP documentation: Install and Configure a HA/DR Provider Script . When the functionality of the hook has been verified, the HA cluster can be started again. [root]# pcs cluster start --all 3.3. Configuring general HA cluster properties To avoid unnecessary failovers of the resources, the following default values for the resource-stickiness and migration-threshold parameters must be set (this only needs to be done on one node): [root]# pcs resource defaults resource-stickiness=1000 [root]# pcs resource defaults migration-threshold=5000 Note As of RHEL 8.4 ( pcs-0.10.8-1.el8 ), the commands above are deprecated. Use the following commands instead. [root]# pcs resource defaults update resource-stickiness=1000 [root]# pcs resource defaults update migration-threshold=5000 resource-stickiness=1000 will encourage the resource to stay running where it is, while migration-threshold=5000 will cause the resource to move to a new node only after 5000 failures. 
5000 is generally sufficient to prevent the resource from prematurely failing over to another node. This also ensures that the resource failover time stays within a controllable limit. 3.4. Creating cloned SAPHanaTopology resource The SAPHanaTopology resource agent gathers information about the status and configuration of SAP HANA System Replication on each node. In addition, it starts and monitors the local SAP HostAgent , which is required for starting, stopping, and monitoring the SAP HANA instances. The SAPHanaTopology resource agent has the following attributes: Attribute Name Required? Default value Description SID yes null The SAP System Identifier (SID) of the SAP HANA installation (must be identical for all nodes). Example: RH1 InstanceNumber yes null The Instance Number of the SAP HANA installation (must be identical for all nodes). Example: 02 Below is an example command to create the SAPHanaTopology cloned resource. [root]# pcs resource create SAPHanaTopology_RH1_02 SAPHanaTopology SID=RH1 InstanceNumber=02 \ op start timeout=600 \ op stop timeout=300 \ op monitor interval=10 timeout=600 \ clone clone-max=2 clone-node-max=1 interleave=true The resulting resource should look like the following: [root]# pcs resource show SAPHanaTopology_RH1_02-clone Clone: SAPHanaTopology_RH1_02-clone Meta Attrs: clone-max=2 clone-node-max=1 interleave=true Resource: SAPHanaTopology_RH1_02 (class=ocf provider=heartbeat type=SAPHanaTopology) Attributes: SID=RH1 InstanceNumber=02 Operations: start interval=0s timeout=600 (SAPHanaTopology_RH1_02-start-interval-0s) stop interval=0s timeout=300 (SAPHanaTopology_RH1_02-stop-interval-0s) monitor interval=10 timeout=600 (SAPHanaTopology_RH1_02-monitor-interval-10s) Note The timeouts shown for the resource operations are only examples and may need to be adjusted depending on the actual SAP HANA setup (for example, large SAP HANA databases can take longer to start up, therefore the start timeout may have to be increased). Once the resource is started, you will see the collected information stored in the form of node attributes that can be viewed with the command pcs status --full . Below is an example of what attributes can look like when only SAPHanaTopology is started. [root]# pcs status --full ... Node Attributes: * Node node1: + hana_rh1_remoteHost : node2 + hana_rh1_roles : 1:P:master1::worker: + hana_rh1_site : DC1 + hana_rh1_srmode : syncmem + hana_rh1_vhost : node1 * Node node2: + hana_rh1_remoteHost : node1 + hana_rh1_roles : 1:S:master1::worker: + hana_rh1_site : DC2 + hana_rh1_srmode : syncmem + hana_rh1_vhost : node2 ... 3.5. Creating Promotable SAPHana resource The SAPHana resource agent manages the SAP HANA instances that are part of the SAP HANA Scale-Up System Replication and also monitors the status of SAP HANA System Replication. In the event of a failure of the SAP HANA primary replication instance, the SAPHana resource agent can trigger a takeover of SAP HANA System Replication based on how the resource agent parameters have been set. The SAPHana resource agent has the following attributes: Attribute Name Required? Default value Description SID yes null The SAP System Identifier (SID) of the SAP HANA installation (must be identical for all nodes). Example: RH1 InstanceNumber yes null The Instance Number of the SAP HANA installation (must be identical for all nodes). Example: 02 PREFER_SITE_TAKEOVER no null Should the resource agent prefer to switch over to the secondary instance instead of restarting the primary locally? 
true: do prefer takeover to the secondary site; false: do prefer restart locally; never: under no circumstances do a takeover of the other node AUTOMATED_REGISTER no false If a takeover event has occurred, should the former primary instance be registered as secondary? ("false": no, manual intervention will be needed; "true": yes, the former primary will be registered by the resource agent as secondary) DUPLICATE_PRIMARY_TIMEOUT no 7200 If a dual-primary situation occurs, the time difference between the two primary time stamps must exceed this timeout before the cluster reacts. If the time difference is less than this timeout, the cluster holds one or both instances in "WAITING" status. This is to give an administrator the chance to react to a failover. If the complete node of the former primary crashes, the former primary will be registered after the time difference has passed. If "only" the SAP HANA instance has crashed, the former primary will be registered immediately. After this registration to the new primary, all data will be overwritten by the system replication. The PREFER_SITE_TAKEOVER , AUTOMATED_REGISTER and DUPLICATE_PRIMARY_TIMEOUT parameters must be set according to the requirements for availability and data protection of the SAP HANA System Replication that is managed by the HA cluster. In general, PREFER_SITE_TAKEOVER should be set to true, to allow the HA cluster to trigger a takeover in case a failure of the primary SAP HANA instance has been detected, since it usually takes less time for the new SAP HANA primary instance to become fully active than it would take for the original SAP HANA primary instance to restart and reload all data back from disk into memory. To be able to verify that all data on the new primary SAP HANA instance is correct after a takeover triggered by the HA cluster has occurred, AUTOMATED_REGISTER should be set to false. This will give an operator the possibility to either switch back to the old primary SAP HANA instance in case a takeover happened by accident, or if the takeover was correct, the old primary SAP HANA instance can be registered as the new secondary SAP HANA instance to get SAP HANA System Replication working again. If AUTOMATED_REGISTER is set to true, then an old primary SAP HANA instance will be automatically registered as the new secondary SAP HANA instance by the SAPHana resource agent after a takeover by the HA cluster has occurred. This will increase the availability of the SAP HANA System Replication setup and prevent so-called "dual-primary" situations in the SAP HANA System Replication environment. But it can potentially increase the risk of data loss or data corruption, because if a takeover was triggered by the HA cluster even though the data on the secondary SAP HANA instance wasn't fully in sync, then the automatic registration of the old primary SAP HANA instance as the new secondary SAP HANA instance would result in all data on this instance being deleted, and therefore any data that was not synced before the takeover occurred will no longer be available. 
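Because AUTOMATED_REGISTER=false leaves re-registration as a manual step, the following is a hedged sketch of what that step can look like after a takeover, once the data on the former primary has been checked. It is run on the former primary node; the host name, instance number, site name, and the replicationMode and operationMode values are examples and must match the System Replication configuration used in your environment.

[rh1adm]$ hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=02 \
    --replicationMode=syncmem --operationMode=logreplay --name=DC1
[root]# pcs resource cleanup SAPHana_RH1_02-clone

The SAP HANA instance on that node should be stopped while it is registered; after the pcs resource cleanup , the HA cluster can start it again as the new secondary.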
The promotable SAPHana cluster resource for managing the SAP HANA instances and SAP HANA System Replication can be created as in the following example: [root]# pcs resource create SAPHana_RH1_02 SAPHana SID=RH1 InstanceNumber=02 \ PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \ op start timeout=3600 \ op stop timeout=3600 \ op monitor interval=61 role="Slave" timeout=700 \ op monitor interval=59 role="Master" timeout=700 \ op promote timeout=3600 \ op demote timeout=3600 \ promotable notify=true clone-max=2 clone-node-max=1 interleave=true The resulting HA cluster resource should look like the following: [root]# pcs resource config SAPHana_RH1_02-clone Clone: SAPHana_RH1_02-clone Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true promotable=true Resource: SAPHana_RH1_02 (class=ocf provider=heartbeat type=SAPHana) Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=180 InstanceNumber=02 PREFER_SITE_TAKEOVER=true SID=RH1 Operations: methods interval=0s timeout=5 (SAPHana_RH1_02-methods-interval-0s) monitor interval=61 role=Slave timeout=700 (SAPHana_RH1_02-monitor-interval-61) monitor interval=59 role=Master timeout=700 (SAPHana_RH1_02-monitor-interval-59) promote interval=0s timeout=3600 (SAPHana_RH1_02-promote-interval-0s) demote interval=0s timeout=3600 (SAPHana_RH1_02-demote-interval-0s) start interval=0s timeout=3600 (SAPHana_RH1_02-start-interval-0s) stop interval=0s timeout=3600 (SAPHana_RH1_02-stop-interval-0s) Note The timeouts for the resource operations are only examples and may need to be adjusted depending on the actual SAP HANA setup (for example, large SAP HANA databases can take longer to start up, therefore the start timeout may have to be increased). Once the resource is started and the HA cluster has executed the first monitor operation, it will add additional node attributes describing the current state of SAP HANA databases on nodes, as seen below: [root]# pcs status --full ... Node Attributes: * Node node1: + hana_rh1_clone_state : PROMOTED + hana_rh1_op_mode : delta_datashipping + hana_rh1_remoteHost : node2 + hana_rh1_roles : 4:P:master1:master:worker:master + hana_rh1_site : DC1 + hana_rh1_sync_state : PRIM + hana_rh1_srmode : syncmem + hana_rh1_version : 2.00.064.00.1660047502 + hana_rh1_vhost : node1 + lpa_rh1_lpt : 1495204085 + master-SAPHana_RH1_02 : 150 * Node node2: + hana_r12_clone_state : DEMOTED + hana_rh1_op_mode : delta_datashipping + hana_rh1_remoteHost : node1 + hana_rh1_roles : 4:S:master1:master:worker:master + hana_rh1_site : DC2 + hana_rh1_srmode : syncmem + hana_rh1_sync_state : SOK + hana_rh1_version : 2.00.064.00.1660047502 + hana_rh1_vhost : node2 + lpa_rh1_lpt : 30 + master-SAPHana_RH1_02 : -INFINITY ... 3.6. Creating virtual IP address resource In order for clients to be able to access the primary SAP HANA instance independently from the HA cluster node it is currently running on, a virtual IP address is needed, which the HA cluster will enable on the node where the primary SAP HANA instance is running. To allow the HA cluster to manage the VIP, create IPaddr2 resource with IP 192.168.0.15 . [root]# pcs resource create vip_RH1_02 IPaddr2 ip="192.168.0.15" Please use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. 
The resulting HA cluster resource should look as follows: [root]# pcs resource show vip_RH1_02 Resource: vip_RH1_02 (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.15 Operations: start interval=0s timeout=20s (vip_RH1_02-start-interval-0s) stop interval=0s timeout=20s (vip_RH1_02-stop-interval-0s) monitor interval=10s timeout=20s (vip_RH1_02-monitor-interval-10s) 3.7. Creating constraints For correct operation, we need to ensure that SAPHanaTopology resources are started before starting SAPHana resources and also that the virtual IP address is present on the node where the primary SAP HANA instance is running. To achieve this, the following constraints are required. 3.7.1. Constraint - start SAPHanaTopology before SAPHana The example command below will create the constraint that mandates the start order of these resources. There are two things worth mentioning here: symmetrical=false attribute defines that we care only about the start of resources and they don't need to be stopped in reverse order. Both resources ( SAPHana and SAPHanaTopology ) have the attribute interleave=true that allows the parallel start of these resources on nodes. This permits that, despite setting the order constraints, we will not wait for all nodes to start SAPHanaTopology , but we can start the SAPHana resource on any of the nodes as soon as SAPHanaTopology is running there. Command for creating the constraint: [root]# pcs constraint order SAPHanaTopology_RH1_02-clone then SAPHana_RH1_02-clone symmetrical=false The resulting constraint should look like the one in the example below: [root]# pcs constraint ... Ordering Constraints: start SAPHanaTopology_RH1_02-clone then start SAPHana_RH1_02-clone (kind:Mandatory) (non-symmetrical) ... 3.7.2. Constraint - colocate the IPaddr2 resource with Master of SAPHana resource Below is an example command that will colocate the IPaddr2 resource with SAPHana resource that was promoted as Master. [root]# pcs constraint colocation add vip_RH1_02 with master SAPHana_RH1_02-clone 2000 Note that the constraint is using a score of 2000 instead of the default INFINITY. This allows the IPaddr2 resource to stay active in case there is no Master promoted in the SAPHana resource, so it is still possible to use tools like SAP Management Console (MMC) or SAP Landscape Management (LaMa) that can use this address to query the status information about the SAP Instance. The resulting constraint should look like the following: [root]# pcs constraint ... Colocation Constraints: vip_RH1_02 with SAPHana_RH1_02-clone (score:2000) (rsc-role:Started) (with-rsc-role:Master) ... 3.8. Adding a secondary virtual IP address for an Active/Active (Read-Enabled) SAP HANA System Replication setup (optional) Starting with SAP HANA 2.0 SPS1, SAP HANA supports Active/Active (Read Enabled) setups for SAP HANA System Replication, where the secondary instance of a SAP HANA System Replication setup can be used for read-only access. To be able to support such setups, a second virtual IP address is required, which enables clients to access the secondary SAP HANA instance. To ensure that the secondary replication site can still be accessed after a takeover has occurred, the HA cluster needs to move the virtual IP address around with the slave of the promotable SAPHana resource. To enable the Active/Active (Read Enabled) mode in SAP HANA, the operationMode must be set to logreplay_readaccess when registering the secondary SAP HANA instance. 3.8.1. 
Creating the resource for managing the secondary virtual IP address [root]# pcs resource create vip2_RH1_02 IPaddr2 ip="192.168.1.11" Please use the appropriate resource agent for managing the virtual IP address based on the platform on which the HA cluster is running. 3.8.2. Creating location constraints This is to ensure that the secondary virtual IP address is placed on the right HA cluster node. [root]# pcs constraint location vip2_RH1_02 rule score=INFINITY hana_rh1_sync_state eq SOK and hana_rh1_roles eq 4:S:master1:master:worker:master [root]# pcs constraint location vip2_RH1_02 rule score=2000 hana_rh1_sync_state eq PRIM and hana_rh1_roles eq 4:P:master1:master:worker:master These location constraints ensure that the second virtual IP resource will have the following behavior: If the primary SAP HANA instance and the secondary SAP HANA instance are both up and running, and SAP HANA System Replication is in sync, the second virtual IP will be active on the HA cluster node where the secondary SAP HANA instance is running. If the secondary SAP HANA instance is not running or the SAP HANA System Replication is not in sync, the second virtual IP will be active on the HA cluster node where the primary SAP HANA instance is running. When the secondary SAP HANA instance is running and SAP HANA System Replication is in sync again, the second virtual IP will move back to the HA cluster node where the secondary SAP HANA instance is running. If the primary SAP HANA instance is not running and a SAP HANA takeover is triggered by the HA cluster, the second virtual IP will continue running on the same node until the SAP HANA instance on the other node is registered as the new secondary and the SAP HANA System Replication is in sync again. This maximizes the time that the second virtual IP resource will be assigned to a node where a healthy SAP HANA instance is running. 3.9. Enabling the SAP HANA srServiceStateChanged() hook for hdbindexserver process failure action (optional) When HANA detects an issue with an indexserver process, it recovers it by stopping and restarting it automatically via the in-built functionality built into SAP HANA. However, in some cases, the service can take a very long time for the "stopping" phase. During that time, the System Replication may get out of sync, while HANA still proceeds to work and accept new connections. Eventually, the service completes the stop-and-restart process and recovers. Instead of waiting for this long-running restart, which poses a risk to data consistency, should anything else fail in the instance during that time, the ChkSrv.py hook script can react to the situation and stop the HANA instance for a faster recovery. In a setup with automated failover enabled, the instance stop leads to a takeover being initiated, if the secondary node is in a healthy state. Otherwise, recovery would continue locally, but the enforced instance restart would speed it up. When configured in the global.ini config file, SAP HANA calls the ChkSrv.py hook script for any events in the instance. The script processes the events and executes actions based on the results of the filters it applies to event details. This way, it can distinguish a HANA indexserver process that is being stopped-and-restarted by HANA after a failure from the same process being stopped as part of an instance shutdown. 
Below are the different possible actions that can be taken: Ignore: This action just writes the parsed events and decision information to a dedicated logfile, which is useful for verifying what the hook script would do. Stop: This action executes a graceful StopSystem for the instance through the sapcontrol command. Kill: This action executes the HDB kill-<signal> command with a default signal 9, which can be configured. Please note that both the stop and kill actions lead to a stopped HANA instance, with the kill being a bit faster in the end. At this point, the cluster notices the failure of the HANA resource and reacts to it in the way it has been configured; typically, it restarts the instance, and if enabled, it also takes care of a takeover. 3.9.1. Verifying the version of the resource-agents-sap-hana package Please verify that the correct version of the resource-agents-sap-hana package providing the components required to enable the srServiceStateChanged() hook for your version of RHEL 8 is installed, as documented in Pacemaker cluster does not trigger a takeover of HANA System Replication when the hdbindexserver process of the primary HANA instance hangs/crashes . 3.9.2. Activating the srServiceStateChanged() hook on all SAP HANA instances Note The steps to activate the srServiceStateChanged() hook need to be performed for each SAP HANA instance on all HA cluster nodes. Update the SAP HANA global.ini file on each node to enable use of the hook script by both SAP HANA instances (e.g., in file /hana/shared/RH1/global/hdb/custom/config/global.ini ): [ha_dr_provider_chksrv] provider = ChkSrv path = /usr/share/SAPHanaSR/srHook execution_order = 2 action_on_lost = stop [trace] ha_dr_saphanasr = info ha_dr_chksrv = info Set the optional parameters as shown below: action_on_lost (default: ignore) stop_timeout (default: 20) kill_signal (default: 9) Below is an explanation of the available options for action_on_lost : ignore : This enables the feature, but only log events. This is useful for monitoring the hook's activity in the configured environment. stop : This executes a graceful sapcontrol -nr <nr> -function StopSystem . kill : This executes HDB kill-<signal> for the fastest stop. Please note that stop_timeout is added to the command execution of the stop and kill actions, and kill_signal is used in the kill action as part of the HDB kill-<signal> command. Activate the new hook while HANA is running by reloading the HA/DR providers: [rh1adm]USD hdbnsutil -reloadHADRProviders Verify the hook initialization by checking the new trace file: [rh1adm]USD cdtrace [rh1adm]USD cat nameserver_chksrv.trc | [
"dnf install resource-agents-sap-hana",
"pcs cluster stop --all",
"[ha_dr_provider_SAPHanaSR] provider = SAPHanaSR path = /usr/share/SAPHanaSR/srHook execution_order = 1 [trace] ha_dr_saphanasr = info",
"visudo -f /etc/sudoers.d/20-saphana Cmnd_Alias DC1_SOK = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC1 -v SOK -t crm_config -s SAPHanaSR Cmnd_Alias DC1_SFAIL = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC1 -v SFAIL -t crm_config -s SAPHanaSR Cmnd_Alias DC2_SOK = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SOK -t crm_config -s SAPHanaSR Cmnd_Alias DC2_SFAIL = /usr/sbin/crm_attribute -n hana_rh1_site_srHook_DC2 -v SFAIL -t crm_config -s SAPHanaSR rh1adm ALL=(ALL) NOPASSWD: DC1_SOK, DC1_SFAIL, DC2_SOK, DC2_SFAIL Defaults!DC1_SOK, DC1_SFAIL, DC2_SOK, DC2_SFAIL !requiretty",
"[rh1adm]USD HDB start",
"[rh1adm]USD cdtrace [rh1adm]USD awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf \"%s %s %s %s\\n\",USD2,USD3,USD5,USD16 }' nameserver_* 2018-05-04 12:34:04.476445 ha_dr_SAPHanaSR SFAIL 2018-05-04 12:53:06.316973 ha_dr_SAPHanaSR SOK grep ha_dr_ *",
"pcs cluster start --all",
"pcs resource defaults resource-stickiness=1000 pcs resource defaults migration-threshold=5000",
"pcs resource defaults update resource-stickiness=1000 pcs resource defaults update migration-threshold=5000",
"pcs resource create SAPHanaTopology_RH1_02 SAPHanaTopology SID=RH1 InstanceNumber=02 op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 clone clone-max=2 clone-node-max=1 interleave=true",
"pcs resource show SAPHanaTopology_RH1_02-clone Clone: SAPHanaTopology_RH1_02-clone Meta Attrs: clone-max=2 clone-node-max=1 interleave=true Resource: SAPHanaTopology_RH1_02 (class=ocf provider=heartbeat type=SAPHanaTopology) Attributes: SID=RH1 InstanceNumber=02 Operations: start interval=0s timeout=600 (SAPHanaTopology_RH1_02-start-interval-0s) stop interval=0s timeout=300 (SAPHanaTopology_RH1_02-stop-interval-0s) monitor interval=10 timeout=600 (SAPHanaTopology_RH1_02-monitor-interval-10s)",
"pcs status --full Node Attributes: * Node node1: + hana_rh1_remoteHost : node2 + hana_rh1_roles : 1:P:master1::worker: + hana_rh1_site : DC1 + hana_rh1_srmode : syncmem + hana_rh1_vhost : node1 * Node node2: + hana_rh1_remoteHost : node1 + hana_rh1_roles : 1:S:master1::worker: + hana_rh1_site : DC2 + hana_rh1_srmode : syncmem + hana_rh1_vhost : node2",
"pcs resource create SAPHana_RH1_02 SAPHana SID=RH1 InstanceNumber=02 \\ PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true op start timeout=3600 op stop timeout=3600 op monitor interval=61 role=\"Slave\" timeout=700 op monitor interval=59 role=\"Master\" timeout=700 op promote timeout=3600 op demote timeout=3600 promotable notify=true clone-max=2 clone-node-max=1 interleave=true",
"pcs resource config SAPHana_RH1_02-clone Clone: SAPHana_RH1_02-clone Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true promotable=true Resource: SAPHana_RH1_02 (class=ocf provider=heartbeat type=SAPHana) Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=180 InstanceNumber=02 PREFER_SITE_TAKEOVER=true SID=RH1 Operations: methods interval=0s timeout=5 (SAPHana_RH1_02-methods-interval-0s) monitor interval=61 role=Slave timeout=700 (SAPHana_RH1_02-monitor-interval-61) monitor interval=59 role=Master timeout=700 (SAPHana_RH1_02-monitor-interval-59) promote interval=0s timeout=3600 (SAPHana_RH1_02-promote-interval-0s) demote interval=0s timeout=3600 (SAPHana_RH1_02-demote-interval-0s) start interval=0s timeout=3600 (SAPHana_RH1_02-start-interval-0s) stop interval=0s timeout=3600 (SAPHana_RH1_02-stop-interval-0s)",
"pcs status --full Node Attributes: * Node node1: + hana_rh1_clone_state : PROMOTED + hana_rh1_op_mode : delta_datashipping + hana_rh1_remoteHost : node2 + hana_rh1_roles : 4:P:master1:master:worker:master + hana_rh1_site : DC1 + hana_rh1_sync_state : PRIM + hana_rh1_srmode : syncmem + hana_rh1_version : 2.00.064.00.1660047502 + hana_rh1_vhost : node1 + lpa_rh1_lpt : 1495204085 + master-SAPHana_RH1_02 : 150 * Node node2: + hana_r12_clone_state : DEMOTED + hana_rh1_op_mode : delta_datashipping + hana_rh1_remoteHost : node1 + hana_rh1_roles : 4:S:master1:master:worker:master + hana_rh1_site : DC2 + hana_rh1_srmode : syncmem + hana_rh1_sync_state : SOK + hana_rh1_version : 2.00.064.00.1660047502 + hana_rh1_vhost : node2 + lpa_rh1_lpt : 30 + master-SAPHana_RH1_02 : -INFINITY",
"pcs resource create vip_RH1_02 IPaddr2 ip=\"192.168.0.15\"",
"pcs resource show vip_RH1_02 Resource: vip_RH1_02 (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.15 Operations: start interval=0s timeout=20s (vip_RH1_02-start-interval-0s) stop interval=0s timeout=20s (vip_RH1_02-stop-interval-0s) monitor interval=10s timeout=20s (vip_RH1_02-monitor-interval-10s)",
"pcs constraint order SAPHanaTopology_RH1_02-clone then SAPHana_RH1_02-clone symmetrical=false",
"pcs constraint Ordering Constraints: start SAPHanaTopology_RH1_02-clone then start SAPHana_RH1_02-clone (kind:Mandatory) (non-symmetrical)",
"pcs constraint colocation add vip_RH1_02 with master SAPHana_RH1_02-clone 2000",
"pcs constraint Colocation Constraints: vip_RH1_02 with SAPHana_RH1_02-clone (score:2000) (rsc-role:Started) (with-rsc-role:Master)",
"pcs resource create vip2_RH1_02 IPaddr2 ip=\"192.168.1.11\"",
"pcs constraint location vip2_RH1_02 rule score=INFINITY hana_rh1_sync_state eq SOK and hana_rh1_roles eq 4:S:master1:master:worker:master pcs constraint location vip2_RH1_02 rule score=2000 hana_rh1_sync_state eq PRIM and hana_rh1_roles eq 4:P:master1:master:worker:master",
"[ha_dr_provider_chksrv] provider = ChkSrv path = /usr/share/SAPHanaSR/srHook execution_order = 2 action_on_lost = stop [trace] ha_dr_saphanasr = info ha_dr_chksrv = info",
"[rh1adm]USD hdbnsutil -reloadHADRProviders",
"[rh1adm]USD cdtrace [rh1adm]USD cat nameserver_chksrv.trc"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_config_ha_cluster_automating-sap-hana-scale-up-system-replication |
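As a complement to the Active/Active (Read Enabled) setup described in section 3.8, the following sketch shows how a secondary SAP HANA instance could be registered with the operationMode required for read access. The host name, instance number, replication mode, and site name are taken from the examples in this document (node1, 02, syncmem, DC2) and are assumptions that must be adapted to the actual environment; the command is run as the <sid>adm user on the secondary node:
[rh1adm]$ hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=02 --replicationMode=syncmem --operationMode=logreplay_readaccess --name=DC2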
Chapter 9. Validating schemas with Service Registry | Chapter 9. Validating schemas with Service Registry You can use Red Hat Service Registry with AMQ Streams. Service Registry is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. You can use Service Registry to decouple the structure of your data from your client applications, and to share and manage your data types and API descriptions at runtime using a REST interface. Service Registry stores schemas used to serialize and deserialize messages, which can then be referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas. Service Registry provides Kafka client serializers/deserializers for Kafka producer and consumer applications. Kafka producer applications use serializers to encode messages that conform to specific event schemas. Kafka consumer applications use deserializers, which validate that the messages have been serialized using the correct schema, based on a specific schema ID. You can enable your applications to use a schema from the registry. This ensures consistent schema usage and helps to prevent data errors at runtime. Additional resources Service Registry documentation Service Registry is built on the Apicurio Registry open source community project available on GitHub: Apicurio/apicurio-registry | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/service-registry-concepts-str |
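To illustrate how a Kafka producer application is pointed at Service Registry, the following client configuration sketch shows the general shape of such a setup. The value serializer class and the registry URL property are assumptions and differ between Service Registry versions, so check the Service Registry documentation for the exact names in your release:
# standard Kafka client settings
bootstrap.servers=my-cluster-kafka-bootstrap:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
# serializer class and registry URL property name are assumptions - verify against your Service Registry version
value.serializer=io.apicurio.registry.serde.avro.AvroKafkaSerializer
apicurio.registry.url=http://service-registry.example.com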
Chapter 3. Enabling monitoring for user-defined projects | Chapter 3. Enabling monitoring for user-defined projects In OpenShift Container Platform 4.7, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can now monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this new feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 3.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important In OpenShift Container Platform 4.7 you must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the ConfigMap object before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Warning When changes are saved to the cluster-monitoring-config ConfigMap object, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted. Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. 
It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources Creating a cluster monitoring config map Configuring the monitoring stack Granting users permission to configure monitoring for user-defined projects 3.2. Granting users permission to monitor user-defined projects Cluster administrators can monitor all core OpenShift Container Platform and user-defined projects. Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the following monitoring roles: The monitoring-rules-view role provides read access to PrometheusRule custom resources for a project. The monitoring-rules-edit role grants a user permission to create, modify, and deleting PrometheusRule custom resources for a project. The monitoring-edit role grants the same privileges as the monitoring-rules-edit role. Additionally, it enables a user to create new scrape targets for services or pods. With this role, you can also create, modify, and delete ServiceMonitor and PodMonitor resources. You can also grant users permission to configure the components that are responsible for monitoring user-defined projects: The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project enables you to edit the user-workload-monitoring-config ConfigMap object. With this role, you can edit the ConfigMap object to configure Prometheus, Prometheus Operator and Thanos Ruler for user-defined workload monitoring. This section provides details on how to assign these roles by using the OpenShift Container Platform web console or the CLI. 3.2.1. Granting user permissions by using the web console You can grant users permissions to monitor their own projects, by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective within the OpenShift Container Platform web console, navigate to User Management Role Bindings Create Binding . In the Binding Type section, select the "Namespace Role Binding" type. In the Name field, enter a name for the role binding. In the Namespace field, select the user-defined project where you want to grant the access. Important The monitoring role will be bound to the project that you apply in the Namespace field. The permissions that you grant to a user by using this procedure will apply only to the selected project. Select monitoring-rules-view , monitoring-rules-edit , or monitoring-edit in the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 3.2.2. Granting user permissions by using the CLI You can grant users permissions to monitor their own projects, by using the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). 
Procedure Assign a monitoring role to a user for a project: USD oc policy add-role-to-user <role> <user> -n <namespace> 1 1 Substitute <role> with monitoring-rules-view , monitoring-rules-edit , or monitoring-edit . Important Whichever role you choose, you must bind it against a specific project as a cluster administrator. As an example, substitute <role> with monitoring-edit , <user> with johnsmith , and <namespace> with ns1 . This assigns the user johnsmith permission to set up metrics collection and to create alerting rules in the ns1 namespace. 3.3. Granting users permission to configure monitoring for user-defined projects You can grant users permission to configure monitoring for user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring 3.4. Accessing metrics from outside the cluster for custom applications Learn how to query Prometheus statistics from the command line when monitoring your own services. You can access monitoring data from outside the cluster with the thanos-querier route. Prerequisites You deployed your own service, following the Enabling monitoring for user-defined projects procedure. Procedure Extract a token to connect to Prometheus: USD SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print USD1 }'` USD TOKEN=`echo USD(oc get secret USDSECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d` Extract your route host: USD THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'` Query the metrics of your own services in the command line. For example: USD NAMESPACE=ns1 USD curl -X GET -kG "https://USDTHANOS_QUERIER_HOST/api/v1/query?" --data-urlencode "query=up{namespace='USDNAMESPACE'}" -H "Authorization: Bearer USDTOKEN" The output will show you the duration that your application pods have been up. Example output {"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up","endpoint":"web","instance":"10.129.0.46:8080","job":"prometheus-example-app","namespace":"ns1","pod":"prometheus-example-app-68d47c4fb6-jztp2","service":"prometheus-example-app"},"value":[1591881154.748,"1"]}]}} 3.5. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload: to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically. 
Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object. 3.6. steps Managing metrics | [
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc -n openshift-user-workload-monitoring get pod",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h",
"oc policy add-role-to-user <role> <user> -n <namespace> 1",
"oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring",
"SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print USD1 }'`",
"TOKEN=`echo USD(oc get secret USDSECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d`",
"THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'`",
"NAMESPACE=ns1",
"curl -X GET -kG \"https://USDTHANOS_QUERIER_HOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\" -H \"Authorization: Bearer USDTOKEN\"",
"{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{\"__name__\":\"up\",\"endpoint\":\"web\",\"instance\":\"10.129.0.46:8080\",\"job\":\"prometheus-example-app\",\"namespace\":\"ns1\",\"pod\":\"prometheus-example-app-68d47c4fb6-jztp2\",\"service\":\"prometheus-example-app\"},\"value\":[1591881154.748,\"1\"]}]}}",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false",
"oc -n openshift-user-workload-monitoring get pod",
"No resources found in openshift-user-workload-monitoring project."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/enabling-monitoring-for-user-defined-projects |
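As noted above, the monitoring-edit role lets a user create scrape targets by defining ServiceMonitor and PodMonitor resources. The following ServiceMonitor is a minimal sketch for the prometheus-example-app service in the ns1 namespace used in the examples; the label selector and port name are assumptions that must match the application's Service object:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - interval: 30s
    port: web
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app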
5.2.19. /proc/meminfo | 5.2.19. /proc/meminfo This is one of the more commonly used files in the /proc/ directory, as it reports a large amount of valuable information about the systems RAM usage. The following sample /proc/meminfo virtual file is from a system with 256 MB of RAM and 512 MB of swap space: Much of the information here is used by the free , top , and ps commands. In fact, the output of the free command is similar in appearance to the contents and structure of /proc/meminfo . But by looking directly at /proc/meminfo , more details are revealed: MemTotal - Total amount of physical RAM, in kilobytes. MemFree - The amount of physical RAM, in kilobytes, left unused by the system. Buffers - The amount of physical RAM, in kilobytes, used for file buffers. Cached - The amount of physical RAM, in kilobytes, used as cache memory. SwapCached - The amount of swap, in kilobytes, used as cache memory. Active - The total amount of buffer or page cache memory, in kilobytes, that is in active use. This is memory that has been recently used and is usually not reclaimed for other purposes. Inactive - The total amount of buffer or page cache memory, in kilobytes, that are free and available. This is memory that has not been recently used and can be reclaimed for other purposes. HighTotal and HighFree - The total and free amount of memory, in kilobytes, that is not directly mapped into kernel space. The HighTotal value can vary based on the type of kernel used. LowTotal and LowFree - The total and free amount of memory, in kilobytes, that is directly mapped into kernel space. The LowTotal value can vary based on the type of kernel used. SwapTotal - The total amount of swap available, in kilobytes. SwapFree - The total amount of swap free, in kilobytes. Dirty - The total amount of memory, in kilobytes, waiting to be written back to the disk. Writeback - The total amount of memory, in kilobytes, actively being written back to the disk. Mapped - The total amount of memory, in kilobytes, which have been used to map devices, files, or libraries using the mmap command. Slab - The total amount of memory, in kilobytes, used by the kernel to cache data structures for its own use. Committed_AS - The total amount of memory, in kilobytes, estimated to complete the workload. This value represents the worst case scenario value, and also includes swap memory. PageTables - The total amount of memory, in kilobytes, dedicated to the lowest page table level. VMallocTotal - The total amount of memory, in kilobytes, of total allocated virtual address space. VMallocUsed - The total amount of memory, in kilobytes, of used virtual address space. VMallocChunk - The largest contiguous block of memory, in kilobytes, of available virtual address space. HugePages_Total - The total number of hugepages for the system. The number is derived by dividing Hugepagesize by the megabytes set aside for hugepages specified in /proc/sys/vm/hugetlb_pool . This statistic only appears on the x86, Itanium, and AMD64 architectures. HugePages_Free - The total number of hugepages available for the system. This statistic only appears on the x86, Itanium, and AMD64 architectures. Hugepagesize - The size for each hugepages unit in kilobytes. By default, the value is 4096 KB on uniprocessor kernels for 32 bit architectures. For SMP, hugemem kernels, and AMD64, the default is 2048 KB. For Itanium architectures, the default is 262144 KB. This statistic only appears on the x86, Itanium, and AMD64 architectures. | [
"MemTotal: 255908 kB MemFree: 69936 kB Buffers: 15812 kB Cached: 115124 kB SwapCached: 0 kB Active: 92700 kB Inactive: 63792 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 255908 kB LowFree: 69936 kB SwapTotal: 524280 kB SwapFree: 524280 kB Dirty: 4 kB Writeback: 0 kB Mapped: 42236 kB Slab: 25912 kB Committed_AS: 118680 kB PageTables: 1236 kB VmallocTotal: 3874808 kB VmallocUsed: 1416 kB VmallocChunk: 3872908 kB HugePages_Total: 0 HugePages_Free: 0 Hugepagesize: 4096 kB"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-meminfo |
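Because /proc/meminfo is plain text, individual fields can be extracted directly from the shell. The following commands are a small illustration; any of the field names described above can be substituted:
grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo
awk '/^MemTotal:/ {total=$2} /^MemFree:/ {free=$2} END {print total - free, "kB in use"}' /proc/meminfo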
Chapter 3. Management of hosts using the Ceph Orchestrator | Chapter 3. Management of hosts using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster. You can also add labels to hosts. Labels are free-form and have no specific meanings. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, mgr for all hosts with manager daemons deployed, rgw for Ceph object gateways, and so on. Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels. This section covers the following administrative tasks: Adding hosts using the Ceph Orchestrator . Adding multiple hosts using the Ceph Orchestrator . Listing hosts using the Ceph Orchestrator . Adding labels to hosts using the Ceph Orchestrator . Removing a label from a host . Removing hosts using the Ceph Orchestrator . Placing hosts in the maintenance mode using the Ceph Orchestrator . 3.1. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. The IP addresses of the new hosts should be updated in /etc/hosts file. 3.2. Adding hosts using the Ceph Orchestrator You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Procedure From the Ceph administration node, log into the Cephadm shell: Example Extract the cluster's public SSH keys to a folder: Syntax Example Copy Ceph cluster's public SSH keys to the root user's authorized_keys file on the new host: Syntax Example From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts . The following example shows the structure of a typical inventory file: Example Note If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 6. Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chronyd , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. From the Ceph administration node, log into the Cephadm shell: Example Use the cephadm orchestrator to add hosts to the storage cluster: Syntax The --label option is optional and this adds the labels when adding the hosts. You can add multiple labels to the host. Example Verification List the hosts: Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . For more information about the cephadm-preflight playbook, see Running the preflight playbook section in the Red Hat Ceph Storage Installation Guide . See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide . See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide . 3.3. 
Setting the initial CRUSH location of host You can add the location identifier to the host which instructs cephadm to create a new CRUSH host located in the specified hierarchy. Note The location attribute only affects the initial CRUSH location. Subsequent changes of the location property is ignored. Also, removing a host does not remove any CRUSH buckets. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Edit the hosts.yaml file to include the following details: Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the hosts using service specification: Syntax Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 3.4. Adding multiple hosts using the Ceph Orchestrator You can use the Ceph Orchestrator to add multiple hosts to a Red Hat Ceph Storage cluster at the same time using the service specification in YAML file format. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Create the hosts.yaml file: Example Edit the hosts.yaml file to include the following details: Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the hosts using service specification: Syntax Example Verification List the hosts: Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 3.5. Listing hosts using the Ceph Orchestrator You can list hosts of a Ceph cluster with Ceph Orchestrators. Note The STATUS of the hosts is blank, in the output of the ceph orch host ls command. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the storage cluster. Procedure Log into the Cephadm shell: Example List the hosts of the cluster: Example You will see that the STATUS of the hosts is blank which is expected. 3.6. Adding labels to hosts using the Ceph Orchestrator You can use the Ceph Orchestrator to add labels to hosts in an existing Red Hat Ceph Storage cluster. A few examples of labels are mgr , mon , and osd based on the service deployed on the hosts. You can also add the following host labels that have special meaning to cephadm and they begin with _ : _no_schedule : This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host. _no_autotune_memory : This label does not autotune memory on the host. It prevents the daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host. _admin : By default, the _admin label is applied to the bootstrapped host in the storage cluster and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy configuration and keyring files in /etc/ceph directory. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the storage cluster Procedure Log into the Cephadm shell: Example Add labels to the hosts: Syntax Example Verification List the hosts: Example 3.7. 
Removing a label from a host You can use the Ceph orchestrator to remove a label from a host. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Launch the cephadm shell: Remove the label. Syntax Example Verification List the hosts: Example 3.8. Removing hosts using the Ceph Orchestrator You can remove hosts of a Ceph cluster with the Ceph Orchestrators. All the daemons are removed with the drain option which adds the _no_schedule label to ensure that you cannot deploy any daemons or a cluster till the operation is complete. Important If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the storage cluster. All the services are deployed. Cephadm is deployed on the nodes where the services have to be removed. Procedure Log into the Cephadm shell: Example Fetch the host details: Example Drain all the daemons from the host: Syntax Example The _no_schedule label is automatically applied to the host which blocks deployment. Check the status of OSD removal: Example When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster. Check if all the daemons are removed from the storage cluster: Syntax Example Remove the host: Syntax Example Additional Resources See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 3.9. Placing hosts in the maintenance mode using the Ceph Orchestrator You can use the Ceph Orchestrator to place the hosts in and out of the maintenance mode. The ceph orch host maintenance enter command stops the systemd target which causes all the Ceph daemons to stop on the host. Similarly, the ceph orch host maintenance exit command restarts the systemd target and the Ceph daemons restart on their own. The orchestrator adopts the following workflow when the host is placed in maintenance: Confirms the removal of hosts does not impact data availability by running the orch host ok-to-stop command. If the host has Ceph OSD daemons, it applies noout to the host subtree to prevent data migration from triggering during the planned maintenance slot. Stops the Ceph target, thereby, stopping all the daemons. Disables the ceph target on the host, to prevent a reboot from automatically starting Ceph services. Exiting maintenance reverses the above sequence. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts added to the cluster. Procedure Log into the Cephadm shell: Example You can either place the host in maintenance mode or place it out of the maintenance mode: Place the host in maintenance mode: Syntax Example The --force flag allows the user to bypass warnings, but not alerts. Place the host out of the maintenance mode: Syntax Example Verification List the hosts: Example | [
"cephadm shell",
"ceph cephadm get-pub-key > ~/ PATH",
"ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2",
"ssh-copy-id -f -i ~/ceph.pub root@host02",
"host01 host02 host03 [admin] host00",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"cephadm shell",
"ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--label= LABEL_NAME_1 , LABEL_NAME_2 ]",
"ceph orch host add host02 10.10.128.70 --labels=mon,mgr",
"ceph orch host ls",
"service_type: host hostname: host01 addr: 192.168.0.11 location: rack: rack1",
"cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml",
"cd /var/lib/ceph/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i hosts.yaml",
"touch hosts.yaml",
"service_type: host addr: host01 hostname: host01 labels: - mon - osd - mgr --- service_type: host addr: host02 hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: host03 hostname: host03 labels: - mon - osd",
"cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml",
"cd /var/lib/ceph/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i hosts.yaml",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label add HOST_NAME LABEL_NAME",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host maintenance enter HOST_NAME [--force]",
"ceph orch host maintenance enter host02 --force",
"ceph orch host maintenance exit HOST_NAME",
"ceph orch host maintenance exit host02",
"ceph orch host ls"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/management-of-hosts-using-the-ceph-orchestrator |
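As mentioned at the beginning of this chapter, host labels can be used to drive daemon placement. A minimal sketch of deploying a service only on hosts that carry a given label, using the mon label from the examples above, could look like this (the service type and label are placeholders to adapt to the actual cluster):
ceph orch apply mon --placement="label:mon"
or, expressed as a service specification in YAML:
service_type: mon
placement:
  label: mon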
Chapter 4. Knative Eventing | Chapter 4. Knative Eventing Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers. Event producers create events, and event sinks , or consumers, receive events. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specifications , which enables creating, parsing, sending, and receiving events in any programming language. Knative Eventing supports the following use cases: Publish an event without creating a consumer You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events. Consume an event without creating a publisher You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST. To enable delivery to multiple types of sinks, Knative Eventing defines the following generic interfaces that can be implemented by multiple Kubernetes resources: Addressable resources Able to receive and acknowledge an event delivered over HTTP to an address defined in the status.address.url field of the event. The Kubernetes Service resource also satisfies the addressable interface. Callable resources Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. These returned events may be further processed in the same way that events from an external event source are processed. 4.1. Using the Knative broker for Apache Kafka The Knative broker implementation for Apache Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities. Knative broker for Apache Kafka provides additional options, such as: Kafka source Kafka channel Kafka broker Kafka sink 4.2. Additional resources Installing the KnativeKafka custom resource Red Hat AMQ Streams documentation Red Hat AMQ Streams TLS and SASL on Apache Kafka documentation Event delivery | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/about_openshift_serverless/about-knative-eventing |
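To make the trigger concept above concrete, the following sketch shows a Trigger that subscribes a Kubernetes Service to events from a broker, filtered by the CloudEvents type attribute. The broker, event type, and service names are illustrative placeholders:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: example-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.events
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display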
22.16.2. Configure Rate Limiting Access to an NTP Service | To enable rate limiting access to the NTP service running on a system, add the limited option to the restrict command as explained in Section 22.16.1, "Configure Access Control to an NTP Service" . If you do not want to use the default discard parameters, then also use the discard command as explained here. The discard command takes the following form: discard [ average value ] [ minimum value ] [ monitor value ] average - specifies the minimum average packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 3 (2^3 equates to 8 seconds). minimum - specifies the minimum packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 1 (2^1 equates to 2 seconds). monitor - specifies the discard probability for packets once the permitted rate limits have been exceeded. The default value is 3000 seconds. This option is intended for servers that receive 1000 or more requests per second. Examples of the discard command are as follows: discard average 4 discard average 4 minimum 2 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_Configure_Rate_Limiting_Access_to_an_NTP_Service
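Putting the restrict and discard pieces together, a minimal /etc/ntp.conf fragment that enables rate limiting with non-default discard values might look as follows; the additional restrict flags are only an example of a typical access-control line and are not required for rate limiting itself:
restrict default limited nomodify notrap nopeer noquery
discard average 4 minimum 2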
Chapter 2. Configuring a GCP project | Chapter 2. Configuring a GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 2.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 2.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 2.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 2.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 2.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 2.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 2.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Compute Global 11 1 Forwarding rules Compute Global 2 0 In-use global IP addresses Compute Global 4 1 Health checks Compute Global 3 0 Images Compute Global 1 0 Networks Compute Global 2 0 Static IP addresses Compute Region 4 1 Routers Compute Global 1 0 Routes Compute Global 2 0 Subnetworks Compute Global 2 0 Target pools Compute Global 3 0 CPUs Compute Region 28 4 Persistent disk SSD (GB) Compute Region 896 128 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 2.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. 
See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. Additional resources See Manually creating IAM for more details about using manual credentials mode. 2.5.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer The roles are applied to the service accounts that the control plane and compute machines use: Table 2.4. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 2.5.2. Required GCP permissions for installer-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. For more information, see "Required roles for using passthrough credentials mode" in the "Required GCP roles" section. Example 2.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 2.2. 
Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 2.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list Example 2.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 2.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 2.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 2.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 2.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 2.10. Required IAM permissions for installation iam.roles.get Example 2.11. Optional Images permissions for installation compute.images.list Example 2.12. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 2.13. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 2.14. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 2.15. 
Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 2.16. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.17. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 2.18. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 2.19. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 2.20. Required Images permissions for deletion compute.images.list 2.5.3. Required GCP permissions for shared VPC installations When you are installing a cluster to a shared VPC , you must configure the service account for both the host project and the service project. If you are not installing to a shared VPC, you can skip this section. You must apply the minimum roles required for a standard installation as listed above, to the service project. Important You can use granular permissions for a Cloud Credential Operator that operates in either manual or mint credentials mode. You cannot use granular permissions in passthrough credentials mode. Ensure that the host project applies one of the following configurations to the service account: Example 2.21. Required permissions for creating firewalls in the host project projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkAdmin roles/compute.securityAdmin Example 2.22. Required minimal permissions projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkUser 2.6. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. 
Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 2.7. Next steps Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_gcp/installing-gcp-account |
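The project preparation described in this chapter can also be scripted. The following gcloud sketch is illustrative only and is not part of the documented installation procedure; the project ID, zone name, base domain, service account name, and the use of the Owner role are placeholder assumptions, and production installations should prefer the granular roles listed in section 2.5.1.

# Assumed placeholders: substitute your own project, domain, and account names.
PROJECT_ID=openshift-example-project
BASE_DOMAIN=clusters.openshiftcorp.com
SA_NAME=openshift-installer

# Enable the required API services from Table 2.1.
gcloud services enable --project "$PROJECT_ID" \
  compute.googleapis.com cloudresourcemanager.googleapis.com dns.googleapis.com \
  iamcredentials.googleapis.com iam.googleapis.com serviceusage.googleapis.com

# Create the public hosted zone and print its name servers so they can be
# registered with the domain registrar, as described in section 2.3.
gcloud dns managed-zones create openshift-zone --project "$PROJECT_ID" \
  --dns-name "$BASE_DOMAIN." --description "OpenShift cluster hosted zone"
gcloud dns managed-zones describe openshift-zone --project "$PROJECT_ID" \
  --format "value(nameServers)"

# Create the installer service account, grant it a role, and export a JSON key.
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/owner
gcloud iam service-accounts keys create osServiceAccount.json \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"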
Chapter 20. Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) | Chapter 20. Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) introduces Red Hat Enterprise Linux 7.5 user space with an updated kernel, which is based on version 4.14 and is provided by the kernel-alt packages. The offering is distributed with other updated packages but most of it is the standard Red Hat Enterprise Linux 7 Server RPMs. Installation ISO images are available on the Customer Portal Downloads page . For information about Red Hat Enterprise Linux 7.5 installation and user space, see the Installation Guide and other Red Hat Enterprise Linux 7 documentation . For information regarding the version, refer to Red Hat Enterprise Linux 7.4 for IBM Power LE (POWER9) - Release Notes. Note Bare metal installations on IBM Power LE using a USB drive require you to specify the inst.stage2= boot option manually at the boot menu. See the Boot Options chapter in the Installation Guide for detailed information. 20.1. New Features and Updates Virtualization KVM virtualization is now supported on IBM POWER9 systems. However, due to hardware differences, certain features and functionalities differ from what is supported on AMD64 and Intel 64 systems. For details, see the Virtualization Deployment and Administration Guide . Platform Tools OProfile now includes support for the IBM POWER9 processor. Note that the PM_RUN_INST_CMPL OProfile performance monitoring event cannot be setup and should not be used in this version of OProfile . (BZ#1463290) This update adds support for the IBM POWER9 performance monitoring hardware events to papi . It includes basic PAPI presets for events, such as instructions ( PAPI_TOT_INS ) or processor cycles ( PAPI_TOT_CYC ). (BZ#1463291) This version of libpfm includes support for the IBM POWER9 performance monitoring hardware events. (BZ#1463292) SystemTap includes backported compatibility fixes necessary for the kernel. Previously, the memcpy() function from the GNU C Library ( glibc ) used unaligned vector load and store instructions on 64-bit IBM POWER systems. Consequently, when memcpy() was used to access device memory on POWER9 systems, performance would suffer. The memcpy() function has been enhanced to use aligned memory access instructions, to provide better performance for applications regardless of the memory involved on POWER9, without affecting the performance on generations of the POWER architecture. (BZ#1498925) Security USBGuard is now available as a Technology Preview on IBM Power LE (POWER9) The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. USBGuard is now available as a Technology Preview on IBM Power LE (POWER9). Note that USB is not supported on IBM Z, and the USBGuard framework cannot be provided on those systems. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-rhel_for_ibm_power9 |
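Because USBGuard is offered only as a Technology Preview on IBM Power LE (POWER9), the following sketch shows one way to try it out. It is not a supported configuration; the package, file, and service names come from the general RHEL 7 USBGuard documentation rather than from this release note.

# Install the framework and generate a whitelist from the devices currently attached.
yum install -y usbguard
usbguard generate-policy > /etc/usbguard/rules.conf

# Start the daemon and inspect how connected USB devices are being classified.
systemctl start usbguard
systemctl enable usbguard
usbguard list-devices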
3.2. C++ HotRod Client on RHEL 5 | 3.2. C++ HotRod Client on RHEL 5 The C++ HotRod Client on Red Hat Enterprise Linux (RHEL) 5 has been deprecated in JBoss Data Grid 6.6.0, and is expected to be removed in version 7.0.0; the C++ HotRod Client for RHEL 6 and RHEL 7 will continue to be supported in JDG 7.0.0. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/c_hotrod_client_on_rhel_5 |
9.11. Other Security Resources | 9.11. Other Security Resources For more information about designing a secure directory, see the following: Understanding and Deploying LDAP Directory Services. T. Howes, M. Smith, G. Good, Macmillan Technical Publishing, 1999. SecurityFocus.com http://www.securityfocus.com Computer Emergency Response Team (CERT) Coordination Center http://www.cert.org | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_a_secure_directory-other_security_resources |
About This Reference | About This Reference Red Hat Directory Server (Directory Server) is a powerful and scalable distributed directory server based on the industry-standard Lightweight Directory Access Protocol (LDAP). Directory Server is the cornerstone for building a centralized and distributed data repository that can be used in an intranet, over an extranet with trading partners, or over the public Internet to reach customers. This reference covers the server configuration and the command-line utilities. It is designed primarily for directory administrators and experienced directory users who want to use the command-line to access the directory. After configuring the server, use this reference to help maintain it. The Directory Server can also be managed through the Directory Server Console, a graphical user interface. The Red Hat Directory Server Administration Guide describes how to do this and explains individual administration tasks more fully. 1. Directory Server Overview The major components of Directory Server include: An LDAP server - The LDAP v3-compliant network daemon. Directory Server Console - A graphical management console that dramatically reduces the effort of setting up and maintaining your directory service. SNMP agent - Can monitor the Directory Server using the Simple Network Management Protocol (SNMP). | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/about_this_reference |
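Since this reference targets administrators who work with the directory from the command line, a typical interaction is an LDAP search against the server. The example below is a generic OpenLDAP-client style query, not a command taken from this guide; the host name, bind DN, and suffix are assumptions about a sample deployment.

# Search for a user entry and return selected attributes; -W prompts for the bind password.
ldapsearch -H ldap://ds.example.com:389 \
  -D "cn=Directory Manager" -W \
  -b "dc=example,dc=com" -s sub "(uid=jsmith)" cn mail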
Chapter 18. Log Record Fields | Chapter 18. Log Record Fields The following fields can be present in log records exported by the logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL , to look for a Kubernetes pod name, use /_search/q=kubernetes.pod_name:name-of-my-pod . The top level fields may be present in every record. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY structured Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise this field will be empty or absent, and the message field will contain the original log message. The structured field can have any subfields that are included in the log message, there are no restrictions defined here. Data type group Example value map[message:starting fluentd worker pid=21631 ppid=21618 worker=0 pid:21631 ppid:21618 worker:0] @timestamp A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The "@" prefix denotes a field that is reserved for a particular use. By default, most tools look for "@timestamp" with ElasticSearch. Data type date Example value 2015-01-24 14:06:05.071000000 Z hostname The name of the host where this log message originated. In a Kubernetes cluster, this is the same as kubernetes.host . Data type keyword ipaddr4 The IPv4 address of the source server. Can be an array. Data type ip ipaddr6 The IPv6 address of the source server, if available. Can be an array. Data type ip level The logging level from various sources, including rsyslog(severitytext property) , a Python logging module, and others. The following values come from syslog.h , and are preceded by their numeric equivalents : 0 = emerg , system is unusable. 1 = alert , action must be taken immediately. 2 = crit , critical conditions. 3 = err , error conditions. 4 = warn , warning conditions. 5 = notice , normal but significant condition. 6 = info , informational. 7 = debug , debug-level messages. The two following values are not part of syslog.h but are widely used: 8 = trace , trace-level messages, which are more verbose than debug messages. 9 = unknown , when the logging system gets a value it does not recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from python logging , you can match CRITICAL with crit , ERROR with err , and so on. Data type keyword Example value info pid The process ID of the logging entity, if available. Data type keyword service The name of the service associated with the logging entity, if available. For example, syslog's APP-NAME and rsyslog's programname properties are mapped to the service field. Data type keyword | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/cluster-logging-exported-fields |
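As the text notes, these fields are queried by their full dotted names. The sketch below assumes log store access through an exposed Elasticsearch endpoint and an app-* index pattern; both are assumptions about the deployment, and the query itself mirrors the /_search?q=kubernetes.pod_name example given above.

# Query recent records for one pod and print timestamp, level, and message with jq.
TOKEN=$(oc whoami -t)
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://elasticsearch.example.com/app-*/_search?q=kubernetes.pod_name:name-of-my-pod&size=20" |
  jq -r '.hits.hits[]._source | [."@timestamp", .level, .message] | @tsv'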
Release Notes for .NET 6.0 RPM packages | Release Notes for .NET 6.0 RPM packages .NET 6.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/index |
5.4.2. Creating Striped Volumes | 5.4.2. Creating Striped Volumes For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O. For general information about striped volumes, see Section 3.3.2, "Striped Logical Volumes" . When you create a striped logical volume, you specify the number of stripes with the -i argument of the lvcreate command. This determines over how many physical volumes the logical volume will be striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device. For example, in a two-legged stripe, the maximum size is twice the size of the smaller device. In a three-legged stripe, the maximum size is three times the size of the smallest device. The following command creates a striped logical volume across 2 physical volumes with a stripe of 64kB. The logical volume is 50 gigabytes in size, is named gfslv , and is carved out of volume group vg0 . As with linear volumes, you can specify the extents of the physical volume that you are using for the stripe. The following command creates a striped volume 100 extents in size that stripes across two physical volumes, is named stripelv and is in volume group testvg . The stripe will use sectors 0-49 of /dev/sda1 and sectors 50-99 of /dev/sdb1 . | [
"lvcreate -L 50G -i2 -I64 -n gfslv vg0",
"lvcreate -l 100 -i2 -nstripelv testvg /dev/sda1:0-49 /dev/sdb1:50-99 Using default stripesize 64.00 KB Logical volume \"stripelv\" created"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/LV_stripecreate |
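After creating a striped logical volume, it can be useful to confirm the stripe layout and then put the volume to use. The commands below are a small follow-on example rather than part of the procedure above; the filesystem type and mount point are arbitrary choices.

# Show the segment layout, including stripe count and the devices backing each segment.
lvs --segments -o +devices vg0 testvg
lvdisplay -m /dev/vg0/gfslv

# Example use of the new volume: create a filesystem and mount it.
mkfs.ext4 /dev/vg0/gfslv
mkdir -p /mnt/gfslv
mount /dev/vg0/gfslv /mnt/gfslv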
Appendix B. Using KVM Virtualization on Multiple Architectures | Appendix B. Using KVM Virtualization on Multiple Architectures By default, KVM virtualization on Red Hat Enterprise Linux 7 is compatible with the AMD64 and Intel 64 architectures. However, starting with Red Hat Enterprise Linux 7.5, KVM virtualization is also supported on the following architectures, thanks to the introduction of the kernel-alt packages: IBM POWER IBM Z ARM systems (not supported) Note that when using virtualization on these architectures, the installation, usage, and feature support differ from AMD64 and Intel 64 in certain respects. For more information, see the sections below: B.1. Using KVM Virtualization on IBM POWER Systems Starting with Red Hat Enterprise Linux 7.5, KVM virtualization is supported on IBM POWER8 Systems and IBM POWER9 systems. However, IBM POWER8 does not use kernel-alt , which means that these two architectures differ in certain aspects. Installation To install KVM virtualization on Red Hat Enterprise Linux 7 for IBM POWER 8 and POWER9 Systems: Install the host system from the bootable image on the Customer Portal: IBM POWER8 IBM POWER9 For detailed instructions, see the Red Hat Enterprise Linux 7 Installation Guide . Ensure that your host system meets the hypervisor requirements: Verify that you have the correct machine type: The output of this command must include the PowerNV entry, which indicates that you are running on a supported PowerNV machine type: Load the KVM-HV kernel module: Verify that the KVM-HV kernel module is loaded: If KVM-HV was loaded successfully, the output of this command includes kvm_hv . Install the qemu-kvm-ma package in addition to other virtualization packages described in Chapter 2, Installing the Virtualization Packages . Architecture Specifics KVM virtualization on Red Hat Enterprise Linux 7.5 for IBM POWER differs from KVM on AMD64 and Intel 64 systems in the following: The recommended minimum memory allocation for a guest on an IBM POWER host is 2GB RAM . The SPICE protocol is not supported on IBM POWER systems. To display the graphical output of a guest, use the VNC protocol. In addition, only the following virtual graphics card devices are supported: vga - only supported in -vga std mode and not in -vga cirrus mode virtio-vga virtio-gpu The following virtualization features are disabled on AMD64 and Intel 64 hosts, but work on IBM POWER. However, they are not supported by Red Hat, and therefore not recommended for use: I/O threads SMBIOS configuration is not available. POWER8 guests, including compatibility mode guests, may fail to start with an error similar to: This is significantly more likely to occur on guests that use Red Hat Enterprise Linux 7.3 or prior. To fix this problem, increase the CMA memory pool available for the guest's hashed page table (HPT) by adding kvm_cma_resv_ratio= memory to the host's kernel command line, where memory is the percentage of host memory that should be reserved for the CMA pool (defaults to 5). Transparent huge pages (THPs) currently do not provide any notable performance benefits on IBM POWER8 guests Also note that the sizes of static huge pages on IBM POWER8 systems are 16MiB and 16GiB, as opposed to 2MiB and 1GiB on AMD64 and Intel 64 and on IBM POWER9. As a consequence, migrating a guest from an IBM POWER8 host to an IBM POWER9 host fails if the guest is configured with static huge pages. 
In addition, to be able to use static huge pages or THPs on IBM POWER8 guests, you must first set up huge pages on the host . A number of virtual peripheral devices that are supported on AMD64 and Intel 64 systems are not supported on IBM POWER systems, or a different device is supported as a replacement: Devices used for PCI-E hierarchy, including the ioh3420 and xio3130-downstream devices, are not supported. This functionality is replaced by multiple independent PCI root bridges, provided by the spapr-pci-host-bridge device. UHCI and EHCI PCI controllers are not supported. Use OHCI and XHCI controllers instead. IDE devices, including the virtual IDE CD-ROM ( ide-cd ) and the virtual IDE disk ( ide-hd ), are not supported. Use the virtio-scsi and virtio-blk devices instead. Emulated PCI NICs ( rtl8139 ) are not supported. Use the virtio-net device instead. Sound devices, including intel-hda , hda-output , and AC97 , are not supported. USB redirection devices, including usb-redir and usb-tablet , are not supported. The kvm-clock service does not have to be configured for time management on IBM POWER systems. The pvpanic device is not supported on IBM POWER systems. However, an equivalent functionality is available and activated on this architecture by default. To enable it on a guest, use the <on_crash> configuration element with the preserve value. In addition, make sure to remove the <panic> element from the <devices> section, as its presence can lead to the guest failing to boot on IBM POWER systems. On IBM POWER8 systems, the host machine must run in single-threaded mode to support guests. This is automatically configured if the qemu-kvm-ma packages are installed. However, guests running on single-threaded hosts can still use multiple threads. When an IBM POWER virtual machine (VM) running on a RHEL 7 host is configured with a NUMA node that uses zero memory ( memory='0' ), the VM does not work correctly. As a consequence, Red Hat does not support IBM POWER VMs with zero-memory NUMA nodes on RHEL 7 | [
"grep ^platform /proc/cpuinfo",
"platform : PowerNV",
"modprobe kvm_hv",
"lsmod | grep kvm",
"qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate memory"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/appe-kvm_on_multiarch |
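Several of the host-side points above (single-threaded mode on POWER8, the CMA reservation for guest hashed page tables, and the 16 MiB static huge page size) can be checked from the shell. This is an illustrative sketch: the kvm_cma_resv_ratio value of 10 is only an example, and ppc64_cpu is provided by the powerpc-utils package.

# Inspect and, if necessary, set single-threaded (SMT off) mode on a POWER8 host.
ppc64_cpu --smt
ppc64_cpu --smt=off

# Reserve a larger CMA pool for guest HPT allocation on the next boot (example value).
grubby --update-kernel=ALL --args="kvm_cma_resv_ratio=10"

# Confirm the static huge page size reported by the host (16384 kB = 16 MiB on POWER8).
grep Hugepagesize /proc/meminfo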
E.3. Sample Metadata | E.3. Sample Metadata The following shows an example of LVM volume group metadata for a volume group called myvg . | [
"Generated by LVM2: Tue Jan 30 16:28:15 2007 contents = \"Text Format Volume Group\" version = 1 description = \"Created *before* executing 'lvextend -L+5G /dev/myvg/mylv /dev/sdc'\" creation_host = \"tng3-1\" # Linux tng3-1 2.6.18-8.el5 #1 SMP Fri Jan 26 14:15:21 EST 2007 i686 creation_time = 1170196095 # Tue Jan 30 16:28:15 2007 myvg { id = \"0zd3UT-wbYT-lDHq-lMPs-EjoE-0o18-wL28X4\" seqno = 3 status = [\"RESIZEABLE\", \"READ\", \"WRITE\"] extent_size = 8192 # 4 Megabytes max_lv = 0 max_pv = 0 physical_volumes { pv0 { id = \"ZBW5qW-dXF2-0bGw-ZCad-2RlV-phwu-1c1RFt\" device = \"/dev/sda\" # Hint only status = [\"ALLOCATABLE\"] dev_size = 35964301 # 17.1491 Gigabytes pe_start = 384 pe_count = 4390 # 17.1484 Gigabytes } pv1 { id = \"ZHEZJW-MR64-D3QM-Rv7V-Hxsa-zU24-wztY19\" device = \"/dev/sdb\" # Hint only status = [\"ALLOCATABLE\"] dev_size = 35964301 # 17.1491 Gigabytes pe_start = 384 pe_count = 4390 # 17.1484 Gigabytes } pv2 { id = \"wCoG4p-55Ui-9tbp-VTEA-jO6s-RAVx-UREW0G\" device = \"/dev/sdc\" # Hint only status = [\"ALLOCATABLE\"] dev_size = 35964301 # 17.1491 Gigabytes pe_start = 384 pe_count = 4390 # 17.1484 Gigabytes } pv3 { id = \"hGlUwi-zsBg-39FF-do88-pHxY-8XA2-9WKIiA\" device = \"/dev/sdd\" # Hint only status = [\"ALLOCATABLE\"] dev_size = 35964301 # 17.1491 Gigabytes pe_start = 384 pe_count = 4390 # 17.1484 Gigabytes } } logical_volumes { mylv { id = \"GhUYSF-qVM3-rzQo-a6D2-o0aV-LQet-Ur9OF9\" status = [\"READ\", \"WRITE\", \"VISIBLE\"] segment_count = 2 segment1 { start_extent = 0 extent_count = 1280 # 5 Gigabytes type = \"striped\" stripe_count = 1 # linear stripes = [ \"pv0\", 0 ] } segment2 { start_extent = 1280 extent_count = 1280 # 5 Gigabytes type = \"striped\" stripe_count = 1 # linear stripes = [ \"pv1\", 0 ] } } } }"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/meta_example |
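Metadata like the sample above is what vgcfgbackup captures and vgcfgrestore consumes. The following commands show how such a file is produced and how archived versions are listed; the archive file name in the commented command is a placeholder.

# Back up the metadata for myvg to the default location (/etc/lvm/backup/myvg)
# or to an explicit file for inspection.
vgcfgbackup myvg
vgcfgbackup -f /tmp/myvg-metadata.txt myvg

# List archived metadata versions; a specific archive can be restored if required.
vgcfgrestore --list myvg
# vgcfgrestore -f /etc/lvm/archive/<archive_file>.vg myvg   # restore a specific archive (use with care)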
Introduction to OpenShift Dedicated | Introduction to OpenShift Dedicated OpenShift Dedicated 4 An overview of OpenShift Dedicated architecture Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/introduction_to_openshift_dedicated/index |
probe::signal.handle | probe::signal.handle Name probe::signal.handle - Signal handler being invoked Synopsis signal.handle Values name Name of the probe point sig The signal number that invoked the signal handler sinfo The address of the siginfo table ka_addr The address of the k_sigaction table associated with the signal sig_mode Indicates whether the signal was a user-mode or kernel-mode signal sig_code The si_code value of the siginfo signal regs The address of the kernel-mode stack area (deprecated in SystemTap 2.1) oldset_addr The address of the bitmask array of blocked signals (deprecated in SystemTap 2.1) sig_name A string representation of the signal | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-handle |
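A minimal way to exercise this probe point is a one-line SystemTap script that prints the handling process together with the signal values listed above. This is an illustrative sketch that assumes the systemtap package and matching kernel debuginfo are installed.

# Print each signal handler invocation system-wide; stop with Ctrl+C.
stap -e 'probe signal.handle {
  printf("%-8d %-16s handling %s (sig=%d)\n", pid(), execname(), sig_name, sig)
}'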
Chapter 5. Managing images | Chapter 5. Managing images 5.1. Managing images overview With OpenShift Container Platform you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave. 5.1.1. Images overview An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository. By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively. 5.2. Tagging images The following sections provide an overview and instructions for using image tags in the context of container images for working with OpenShift Container Platform image streams and their tags. 5.2.1. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 5.2.2. Image tag conventions Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built. If there is too much information embedded in a tag name, like v2.0.1-may-2019 , the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. In very large clusters, the schema of creating new tags for every revised image could eventually fill up the etcd datastore with excess tag metadata for images that are long outdated. If the tag is named v2.0 , image revisions are more likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images. Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag> : Table 5.1. Image tag naming conventions Description Example Revision myimage:v2.0.1 Architecture myimage:v2.0-x86_64 Base image myimage:v1.2-centos7 Latest (potentially unstable) myimage:latest Latest stable myimage:stable If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images. 5.2.3. Adding tags to image streams An image stream in OpenShift Container Platform comprises zero or more container images identified by tags. There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change for the destination. A tracking tag means the destination tag's metadata is updated during the import of the source tag. 
Procedure You can add tags to an image stream using the oc tag command: $ oc tag <source> <destination> For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag: $ oc tag ruby:2.0 ruby:static-2.0 This creates a new image stream tag named static-2.0 in the ruby image stream. The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes. To ensure the destination tag is updated when the source tag changes, use the --alias=true flag: $ oc tag --alias=true <source> <destination> Note Use a tracking tag for creating permanent aliases, for example, latest or stable . The tag only works correctly within a single image stream. Trying to create a cross-image stream alias produces an error. You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level. The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently. If you want to instruct OpenShift Container Platform to always fetch the tagged image from the integrated registry, use --reference-policy=local . The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the next time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy. 5.2.4. Removing tags from image streams You can remove tags from an image stream. Procedure To remove a tag completely from an image stream run: $ oc delete istag/ruby:latest or: $ oc tag -d ruby:latest 5.2.5. Referencing images in imagestreams You can use tags to reference images in image streams using the following reference types. Table 5.2. Imagestream reference types Reference type Description ImageStreamTag An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag. ImageStreamImage An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID. DockerImage A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name. When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage , but nothing related to ImageStreamImage . This is because the ImageStreamImage objects are automatically created in OpenShift Container Platform when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams. Procedure To reference an image for a given image stream and tag, use ImageStreamTag : To reference an image for a given image stream and image sha ID, use ImageStreamImage : The <id> is an immutable identifier for a specific image, also called a digest. To reference or retrieve an image for a given external registry, use DockerImage : Note When no tag is specified, it is assumed the latest tag is used. You can also reference a third-party registry: Or an image with a digest: 5.3. 
Image pull policy Each container in a pod has a container image. Once you have created an image and pushed it to a registry, you can then refer to it in the pod. 5.3.1. Image pull policy overview When OpenShift Container Platform creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. There are three possible values for imagePullPolicy : Table 5.3. imagePullPolicy values Value Description Always Always pull the image. IfNotPresent Only pull the image if it does not already exist on the node. Never Never pull the image. If a container imagePullPolicy parameter is not specified, OpenShift Container Platform sets it based on the image tag: If the tag is latest , OpenShift Container Platform defaults imagePullPolicy to Always . Otherwise, OpenShift Container Platform defaults imagePullPolicy to IfNotPresent . 5.4. Using image pull secrets If you are using the OpenShift Container Platform internal registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required. However, for other scenarios, such as referencing images across OpenShift Container Platform projects or from secured registries, then additional configuration steps are required. You can obtain the image pull secret from the Red Hat OpenShift Cluster Manager . This pull secret is called pullSecret . You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io , which serve the container images for OpenShift Container Platform components. Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.4.1. Allowing pods to reference images across projects When using the internal registry, to allow pods in project-a to reference images in project-b , a service account in project-a must be bound to the system:image-puller role in project-b . Note When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift Container Platform internal registry. Procedure To allow pods in project-a to reference images in project-b , bind a service account in project-a to the system:image-puller role in project-b : USD oc policy add-role-to-user \ system:image-puller system:serviceaccount:project-a:default \ --namespace=project-b After adding that role, the pods in project-a that reference the default service account are able to pull images from project-b . To allow access for any service account in project-a , use the group: USD oc policy add-role-to-group \ system:image-puller system:serviceaccounts:project-a \ --namespace=project-b 5.4.2. Allowing pods to reference images from other secured registries The .dockercfg USDHOME/.docker/config.json file for Docker clients is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry. To pull a secured container image that is not from OpenShift Container Platform's internal registry, you must create a pull secret from your Docker credentials and add it to your service account. 
Procedure If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running: USD oc create secret generic <pull_secret_name> \ --from-file=.dockercfg=<path/to/.dockercfg> \ --type=kubernetes.io/dockercfg Or if you have a USDHOME/.docker/config.json file: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default : USD oc secrets link default <pull_secret_name> --for=pull 5.4.2.1. Pulling from private registries with delegated authentication A private registry can delegate authentication to a separate service. In these cases, image pull secrets must be defined for both the authentication and registry endpoints. Procedure Create a secret for the delegated authentication server: USD oc create secret docker-registry \ --docker-server=sso.redhat.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ redhat-connect-sso secret/redhat-connect-sso Create a secret for the private registry: USD oc create secret docker-registry \ --docker-server=privateregistry.example.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ private-registry secret/private-registry 5.4.3. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. Important To transfer your cluster to another owner, you must first initiate the transfer in OpenShift Cluster Manager , and then update the pull secret on the cluster. Updating a cluster's pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager. For more information about transferring cluster ownership , see "Transferring cluster ownership" in the Red Hat OpenShift Cluster Manager documentation. Warning Cluster resources must adjust to the new pull secret, which can temporarily limit the usability of the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. 
Enter the following command to update the global pull secret for your cluster: $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. | [
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"oc tag <source> <destination>",
"oc tag ruby:2.0 ruby:static-2.0",
"oc tag --alias=true <source> <destination>",
"oc delete istag/ruby:latest",
"oc tag -d ruby:latest",
"<image_stream_name>:<tag>",
"<image_stream_name>@<id>",
"openshift/ruby-20-centos7:2.0",
"registry.redhat.io/rhel7:latest",
"centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b",
"oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso",
"oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/managing-images |
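The pieces described in this chapter (tracking tags, scheduled imports, and cross-project pull permissions) can be combined as follows. This is a hedged sketch rather than a documented procedure: the project names, the UBI image reference, and the internal registry host name image-registry.openshift-image-registry.svc:5000 are assumptions about a default OpenShift Container Platform 4 cluster.

# Create a scheduled tag in project-b that tracks an external image and re-imports it periodically.
oc create namespace project-b
oc tag registry.redhat.io/ubi8/ubi:latest ubi:latest --scheduled=true -n project-b

# Inspect the resulting image stream and its tags.
oc describe is/ubi -n project-b
oc get istag -n project-b

# Let the default service account in project-a pull from project-b, then reference
# the image through the internal registry.
oc create namespace project-a
oc policy add-role-to-user system:image-puller \
  system:serviceaccount:project-a:default --namespace=project-b
oc run ubi-test -n project-a \
  --image=image-registry.openshift-image-registry.svc:5000/project-b/ubi:latest \
  --command -- sleep infinity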
Chapter 1. JBoss EAP XP upgrades | Chapter 1. JBoss EAP XP upgrades 1.1. Upgrades and migrations Use the steps outlined in the JBoss EAP XP 4.0 upgrade and migration guide to prepare, upgrade, and migrate your JBoss EAP XP 3.0.x product to the JBoss EAP XP 4.0.0 product. JBoss EAP XP 4.0.0 is compatible with only JBoss EAP 7.4. If you operate servers on JBoss EAP 7.3 and you want to apply the JBoss EAP XP 4.0.0 patch on it, you must first upgrade your JBoss EAP 7.3 instance to JBoss EAP 7.4. The guide references tools that you can use for the upgrading and migration process. These tools are as follows: Migration Toolkit for Applications (MTA) JBoss Server Migration Tool After you successfully upgrade and migrate JBoss EAP XP 3.0.x release to JBoss EAP XP 4.0.0, you can begin to implement any applications migrations for your JBoss EAP 7.4 instance. Additional resources For information about archiving applications that you plan to migrate to JBoss EAP XP 4.0.0, see Back Up Important Data and Review Server State in the Migration Guide . 1.2. Preparation for upgrade and migration After you upgrade the JBoss EAP Expansion Pack, you might have to update application code. For JBoss EAP XP 4.0.0, some backward compatibility might exist for JBoss EAP XP 3.0.x applications. However, if your application uses features that were deprecated or functionality that was removed from JBoss EAP XP 4.0.0, you might need to make changes to your application code. Please review the following new items before you begin the migration process: JBoss EAP XP features added in the JBoss EAP XP 4.0.0 release. MicroProfile capabilities added in the JBoss EAP XP 4.0.0. Enhancements to existing MicroProfile capabilities. Capabilities and features that are deprecated in the JBoss EAP XP 4.0.0. Capabilities and features that have been removed from JBoss EAP XP 4.0.0. Tools that you can use to migrate from one EAP XP release to another release. After you have reviewed the listed items, analyze your environment and plan for the upgrade process and migration process. Ensure you back up any applications that you plan to migrate to JBoss EAP XP 4.0.0. You can now upgrade your current JBoss EAP XP 3.0.x release to JBoss EAP XP 4.0.0. You can implement any applications migrations after the upgrade process Additional resources For information about archiving applications that you plan to migrate to JBoss EAP XP 4.0.0, see Back Up Important Data and Review Server State in the Migration Guide . 1.3. New JBoss EAP XP capabilities The JBoss EAP XP 4.0.0 includes new features that enhance the use of Red Hat implementation of the MicroProfile specification for JBoss EAP applications. Note The MicroProfile Reactive Messaging subsystem supports Red Hat AMQ Streams. This feature implements the MicroProfile Reactive Messaging 2.0.1 API and Red Hat provides the feature as a technology preview for JBoss EAP XP 4.0.0. Red Hat tested Red Hat AMQ Streams 2021.Q4 on JBoss EAP. However, check the Red Hat JBoss Enterprise Application Platform supported configurations page for information about the latest Red Hat AMQ Streams version that has been tested on JBoss EAP XP 4.0.0. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . JBoss EAP XP 4.0.0 includes the following new features in its release: Run CLI scripts after you have started your application. Use the --cli-script=<path to CLI script> argument to update the server configuration of a bootable JAR file at runtime. Use the MicroProfile Reactive Messaging 1.0 API to send and receive messages between microservices. Use the MicroProfile Reactive Messaging 1.0 API to write and configure a user application, so the application can send, receive, and process event streams efficiently and asynchronously. Enable MicroProfile Reactive Messaging functionality in your server configuration, as MicroProfile Reactive Messaging only comes pre-installed on your server. View the MicroProfile Reactive Messaging with MicroProfile Reactive Messaging with Kafka quickstart to learn how you can complete the following tasks on your server: Enable the MicroProfile Reactive Messaging subsystem. Run and test applications by using MicroProfile Reactive Messaging to send data and receive data from Red Hat AMQ Streams. Additional resources For information about Red Hat AMQ Streams, see Overview of AMQ Streams in the Using AMQ Streams on OpenShift guide. For information about Technology Preview features. see Technology Preview Features Support Scope on the Red Hat Customer Portal. For information on the Red Hat AMQ Streams versions, see Red Hat AMQ on the Product Documentation page. For more information about the MicroProfile Reactive Messaging with Kafka quickstart, see jboss-eap-quickstarts and select the listed MicroProfile Reactive Messaging with Kafka quickstart. 1.4. Enhancements to MicroProfile capabilities The JBoss EAP XP 4.0.0 release includes support for the following MicroProfile 4.1 components: MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile JWT MicroProfile Metrics MicroProfile OpenAPI MicroProfile OpenTracing MicroProfile REST Client Additional resources For more information about MicroProfile 4.1 and its specifications, see MicroProfile 4.1 on GitHub . For more information about MicroProfile 4.1 specification components, see About JBoss EAP XP in the Using JBoss EAP XP 4.0.0 guide. 1.5. Deprecated and unsupported MicroProfile capabilities Before you migrate your application to JBoss EAP XP 4.0.0 be aware that some features that were available in JBoss EAP XP 3.0.x might be deprecated or no longer supported. Red Hat removed support for some technologies due to the high maintenance cost, low community interest, and much better alternative solutions. Ensure that you review the Red Hat JBoss EAP XP 4.0.0 Release Notes guide and the 7.4.0 Release Notes guide for any unsupported and deprecated features. Additional resources For more information about any unsupported and deprecated features for JBoss EAP XP 4.0.0, see unsupported features and deprecated features sections in the release notes . For more information about any unsupported and deprecated features for JBoss EAP 7.4, see the 7.4.0 Release Notes guide. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/jboss_eap_xp_4.0_upgrade_and_migration_guide/expansion-pack-migration-guide_default |
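The --cli-script argument called out above can be exercised against a bootable JAR as follows. This is an illustrative sketch: the JAR name is a placeholder, and the CLI command shown (raising the console log level) is just one example of a configuration change applied at startup.

# Hypothetical management CLI script applied to the bootable JAR's configuration at boot.
cat > update-config.cli <<'EOF'
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=level, value=DEBUG)
EOF

# Start the application and apply the script while the server configuration is loaded.
java -jar target/my-app-bootable.jar --cli-script=update-config.cli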
23.13. Events Configuration | 23.13. Events Configuration Using the following sections of domain XML it is possible to override the default actions for various events: <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <on_lockfailure>poweroff</on_lockfailure> Figure 23.23. Events Configuration The following collections of elements allow the actions to be specified when a guest virtual machine operating system triggers a life cycle operation. A common use case is to force a reboot to be treated as a power off when doing the initial operating system installation. This allows the VM to be re-configured for the first post-install boot up. The components of this section of the domain XML are as follows: Table 23.9. Event configuration elements State Description <on_poweroff> Specifies the action that is to be executed when the guest virtual machine requests a power off. Four arguments are possible: destroy - This action terminates the domain completely and releases all resources. restart - This action terminates the domain completely and restarts it with the same configuration. preserve - This action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - This action terminates the domain completely and then restarts it with a new name. <on_reboot> Specifies the action to be executed when the guest virtual machine requests a reboot. Four arguments are possible: destroy - This action terminates the domain completely and releases all resources. restart - This action terminates the domain completely and restarts it with the same configuration. preserve - This action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - This action terminates the domain completely and then restarts it with a new name. <on_crash> Specifies the action that is to be executed when the guest virtual machine crashes. In addition, it supports these additional actions: coredump-destroy - The crashed domain's core is dumped, the domain is terminated completely, and all resources are released. coredump-restart - The crashed domain's core is dumped, and the domain is restarted with the same configuration settings. Four arguments are possible: destroy - This action terminates the domain completely and releases all resources. restart - This action terminates the domain completely and restarts it with the same configuration. preserve - This action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - This action terminates the domain completely and then restarts it with a new name. <on_lockfailure> Specifies the action to take when a lock manager loses resource locks. The following actions are recognized by libvirt, although not all of them need to be supported by individual lock managers. When no action is specified, each lock manager will take its default action. The following arguments are possible: poweroff - Forcefully powers off the domain. restart - Restarts the domain to reacquire its locks. pause - Pauses the domain so that it can be manually resumed when lock issues are solved. ignore - Keeps the domain running as if nothing happened. | [
"<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <on_lockfailure>poweroff</on_lockfailure>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-events_configuration |
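To illustrate the installation use case mentioned in the section above, the following minimal snippet treats a guest-requested reboot as a power off during the initial operating system installation, so the domain stops and can be reconfigured before its first post-install boot. Only the relevant elements are shown; the rest of the domain XML is unchanged, and <on_reboot> would typically be set back to restart once the installation is complete.

<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>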
Chapter 20. Upgrading Streams for Apache Kafka and Kafka | Chapter 20. Upgrading Streams for Apache Kafka and Kafka Upgrade your Kafka cluster with no downtime. Streams for Apache Kafka 2.9 supports and uses Apache Kafka version 3.9.0. Kafka 3.8.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9. You upgrade to the latest supported version of Kafka when you install the latest version of Streams for Apache Kafka. 20.1. Upgrade prerequisites Before you begin the upgrade process, make sure you are familiar with any upgrade changes described in the Streams for Apache Kafka 2.9 on Red Hat Enterprise Linux Release Notes . Note Refer to the documentation supporting a specific version of Streams for Apache Kafka for information on how to upgrade to that version. 20.2. Streams for Apache Kafka upgrade paths Two upgrade paths are available for Streams for Apache Kafka. Incremental upgrade An incremental upgrade involves upgrading Streams for Apache Kafka from the previous minor version to version 2.9. Multi-version upgrade A multi-version upgrade involves upgrading an older version of Streams for Apache Kafka to version 2.9 within a single upgrade, skipping one or more intermediate versions. For example, you might wish to upgrade from one LTS version to the next LTS version, skipping intermediate releases. The upgrade process is the same for either path; you just need to make sure that the inter.broker.protocol.version is switched to the newer version. 20.3. Updating Kafka versions Upgrading Kafka when using ZooKeeper for cluster management requires updates to the Kafka version ( Kafka.spec.kafka.version ) and its inter-broker protocol version ( inter.broker.protocol.version ) in the configuration of the Kafka resource. Each version of Kafka has a compatible version of the inter-broker protocol. The inter-broker protocol is used for inter-broker communication. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the following table. The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config . The following table shows the differences between Kafka versions: Table 20.1. Kafka version differences Streams for Apache Kafka version Kafka version Inter-broker protocol version Log message format version ZooKeeper version 2.9 3.9.0 3.9 3.9 3.8.4 2.8 3.8.0 3.8 3.8 3.8.4 Kafka 3.9.0 is supported for production use. Kafka 3.8.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9. Log message format version When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with. The properties used to set a specific message format version are as follows: message.format.version property for topics log.message.format.version property for Kafka brokers From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don't need to be set. The values reflect the Kafka version used. When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version . Otherwise, you can set the message format version based on the Kafka version you are upgrading to.
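If you do need to pin the message format of an individual topic rather than rely on the broker-wide default, the topic configuration can be changed with the kafka-configs.sh tool. The following is a sketch only: the bootstrap server address and the topic name (my-topic) are placeholders, and from Kafka 3.0.0 onwards this setting is normally unnecessary because the value is derived from inter.broker.protocol.version.

./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --add-config message.format.version=3.8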
The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration. Rolling updates from Kafka version changes The Cluster Operator initiates rolling updates to Kafka brokers when the Kafka version is updated. Further rolling updates depend on the configuration for inter.broker.protocol.version and log.message.format.version . If Kafka.spec.kafka.config contains... The Cluster Operator initiates... Both the inter.broker.protocol.version and the log.message.format.version . A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by log.message.format.version . Changing each will trigger a further rolling update. Either the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. No configuration for the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. Important From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka. As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper. A single rolling update occurs even if the ZooKeeper version is unchanged. Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version. 20.4. Strategies for upgrading clients Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved. Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and client independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable. If you upgrade clients before brokers, some new features may not work as they are not yet supported by brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message versions. 20.5. Upgrading Kafka brokers and ZooKeeper Upgrade Kafka brokers and ZooKeeper on a host machine to use the latest version of Streams for Apache Kafka. You update the installation files, then configure and restart all Kafka brokers to use a new inter-broker protocol version. After performing these steps, data is transmitted between the Kafka brokers using the new inter-broker protocol version. For this setup, Kafka is installed in the /opt/kafka/ directory. Note From Kafka 3.0.0, message format version values are assumed to match the inter.broker.protocol.version and don't need to be set. The values reflect the Kafka version used. Prerequisites You are logged in to Red Hat Enterprise Linux as the Kafka user. You have installed Kafka and other Kafka components you are using on separate hosts. For more information, see Section 3.1, "Installation environment" . 
You have downloaded the installation files . Procedure For each Kafka broker in your Streams for Apache Kafka cluster, one at a time: Download the Streams for Apache Kafka archive from the Streams for Apache Kafka software downloads page . Note If prompted, log in to your Red Hat account. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-kafka-bin.zip file. mkdir /tmp/kafka unzip amq-streams-<version>-kafka-bin.zip -d /tmp/kafka If running, stop ZooKeeper and the Kafka broker running on the host. ./bin/zookeeper-server-stop.sh ./bin/kafka-server-stop.sh jcmd | grep zookeeper jcmd | grep kafka If you are running Kafka on a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Delete the libs and bin directories from your existing installation: rm -rf /opt/kafka/libs /opt/kafka/bin Copy the libs and bin directories from the temporary directory: cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/ If required, update the configuration files in the config directory to reflect any changes in the new versions. Delete the temporary directory. rm -r /tmp/kafka Edit the ./config/server.properties properties file. Set the inter.broker.protocol.version and log.message.format.version properties to the current version. For example, the current version is 3.8 if upgrading from Kafka version 3.8.0 to 3.9.0: inter.broker.protocol.version=3.8 log.message.format.version=3.8 Use the correct version for the Kafka version you are upgrading from ( 3.7 , 3.8 , and so on). Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade. If the properties are not configured, add them with the current version. If you are upgrading from Kafka 3.0.0 or later, you only need to set the inter.broker.protocol.version . Restart the updated ZooKeeper and Kafka broker: ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties ./bin/kafka-server-start.sh -daemon ./config/server.properties The Kafka broker and ZooKeeper start using the binaries for the latest Kafka version. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Verify that the restarted Kafka broker has caught up with the partition replicas it is following. Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics . In the next steps, update your Kafka brokers to use the new inter-broker protocol version. Update each broker, one at a time. Warning Downgrading Streams for Apache Kafka is not possible after completing the following steps.
Set the inter.broker.protocol.version property to 3.9 in the ./config/server.properties file: inter.broker.protocol.version=3.9 On the command line, stop the Kafka broker that you modified: ./bin/kafka-server-stop.sh Check that Kafka is not running: jcmd | grep kafka Restart the Kafka broker that you modified: ./bin/kafka-server-start.sh -daemon ./config/server.properties Check that Kafka is running: jcmd | grep kafka If you are upgrading from a version earlier than Kafka 3.0.0, set the log.message.format.version property to 3.9 in the ./config/server.properties file: log.message.format.version=3.9 On the command line, stop the Kafka broker that you modified: ./bin/kafka-server-stop.sh Check that Kafka is not running: jcmd | grep kafka Restart the Kafka broker that you modified: ./bin/kafka-server-start.sh -daemon ./config/server.properties Check that Kafka is running: jcmd | grep kafka Verify that the restarted Kafka broker has caught up with the partition replicas it is following. Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics . If it was used in the upgrade, remove the legacy log.message.format.version configuration from the server.properties file. Upgrading client applications Ensure all Kafka client applications are updated to use the new version of the client binaries as part of the upgrade process and verify their compatibility with the Kafka upgrade. If needed, coordinate with the team responsible for managing the client applications. Tip To check that a client is using the latest message format, use the kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec metric. The metric shows 0 if the latest message format is being used. 20.6. Upgrading Kafka components Upgrade Kafka components on a host machine to use the latest version of Streams for Apache Kafka. You can use the Streams for Apache Kafka installation files to upgrade the following components: Kafka Connect MirrorMaker Kafka Bridge (separate ZIP file) For this setup, Kafka is installed in the /opt/kafka/ directory. Prerequisites You are logged in to Red Hat Enterprise Linux as the Kafka user. You have downloaded the installation files . You have upgraded Kafka . If a Kafka component is running on the same host as Kafka, you'll also need to stop and start Kafka when upgrading. Procedure For each host running an instance of the Kafka component: Download the Streams for Apache Kafka or Kafka Bridge installation files from the Streams for Apache Kafka software downloads page . Note If prompted, log in to your Red Hat account. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-kafka-bin.zip file. mkdir /tmp/kafka unzip amq-streams-<version>-kafka-bin.zip -d /tmp/kafka For Kafka Bridge, extract the amq-streams-<version>-bridge-bin.zip file. If running, stop the Kafka component running on the host. Delete the libs and bin directories from your existing installation: rm -rf ./libs ./bin Copy the libs and bin directories from the temporary directory: cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/ If required, update the configuration files in the config directory to reflect any changes in the new versions. Delete the temporary directory. rm -r /tmp/kafka Start the Kafka component using the appropriate script and properties files. 
Starting Kafka Connect in standalone mode ./bin/connect-standalone.sh \ ./config/connect-standalone.properties <connector1> .properties [ <connector2> .properties ...] Starting Kafka Connect in distributed mode ./bin/connect-distributed.sh \ ./config/connect-distributed.properties Starting MirrorMaker 2 in dedicated mode ./bin/connect-mirror-maker.sh \ ./config/connect-mirror-maker.properties Starting Kafka Bridge ./bin/kafka_bridge_run.sh \ --config-file= <path> /application.properties Verify that the Kafka component is running, and producing or consuming data as expected. Verifying Kafka Connect in standalone mode is running jcmd | grep ConnectStandalone Verifying Kafka Connect in distributed mode is running jcmd | grep ConnectDistributed Verifying MirrorMaker 2 in dedicated mode is running jcmd | grep mirrorMaker Verifying Kafka Bridge is running by checking the log HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092 | [
"mkdir /tmp/kafka unzip amq-streams-<version>-kafka-bin.zip -d /tmp/kafka",
"./bin/zookeeper-server-stop.sh ./bin/kafka-server-stop.sh jcmd | grep zookeeper jcmd | grep kafka",
"rm -rf /opt/kafka/libs /opt/kafka/bin",
"cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/",
"rm -r /tmp/kafka",
"inter.broker.protocol.version=3.8 log.message.format.version=3.8",
"./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties ./bin/kafka-server-start.sh -daemon ./config/server.properties",
"inter.broker.protocol.version=3.9",
"./bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"./bin/kafka-server-start.sh -daemon ./config/server.properties",
"jcmd | grep kafka",
"log.message.format.version=3.9",
"./bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"./bin/kafka-server-start.sh -daemon ./config/server.properties",
"jcmd | grep kafka",
"mkdir /tmp/kafka unzip amq-streams-<version>-kafka-bin.zip -d /tmp/kafka",
"rm -rf ./libs ./bin",
"cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/ cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/",
"rm -r /tmp/kafka",
"./bin/connect-standalone.sh ./config/connect-standalone.properties <connector1> .properties [ <connector2> .properties ...]",
"./bin/connect-distributed.sh ./config/connect-distributed.properties",
"./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties",
"./bin/kafka_bridge_run.sh --config-file= <path> /application.properties",
"jcmd | grep ConnectStandalone",
"jcmd | grep ConnectDistributed",
"jcmd | grep mirrorMaker",
"HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-upgrade-str |
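One way to perform the replica-sync check referred to in the broker upgrade procedure is to list under-replicated partitions after each broker restart; empty output indicates that all replicas hosted by the restarted broker are back in sync. The bootstrap server address below is a placeholder for one of your own listeners.

./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions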
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] | Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the dnsRecord. status object status is the most recently observed status of the dnsRecord. 11.1.1. .spec Description spec is the specification of the desired behavior of the dnsRecord. Type object Required dnsManagementPolicy dnsName recordTTL recordType targets Property Type Description dnsManagementPolicy string dnsManagementPolicy denotes the current policy applied on the DNS record. Records that have policy set as "Unmanaged" are ignored by the ingress operator. This means that the DNS record on the cloud provider is not managed by the operator, and the "Published" status condition will be updated to "Unknown" status, since it is externally managed. Any existing record on the cloud provider can be deleted at the discretion of the cluster admin. This field defaults to Managed. Valid values are "Managed" and "Unmanaged". dnsName string dnsName is the hostname of the DNS record recordTTL integer recordTTL is the record TTL in seconds. If zero, the default is 30. RecordTTL will not be used in AWS regions Alias targets, but will be used in CNAME targets, per AWS API contract. recordType string recordType is the DNS record type. For example, "A" or "CNAME". targets array (string) targets are record targets. 11.1.2. .status Description status is the most recently observed status of the dnsRecord. Type object Property Type Description observedGeneration integer observedGeneration is the most recently observed generation of the DNSRecord. When the DNSRecord is updated, the controller updates the corresponding record in each managed zone. If an update for a particular zone fails, that failure is recorded in the status condition for the zone so that the controller can determine that it needs to retry the update for that specific zone. zones array zones are the status of the record in each zone. zones[] object DNSZoneStatus is the status of a record within a specific zone. 11.1.3. .status.zones Description zones are the status of the record in each zone. Type array 11.1.4. 
.status.zones[] Description DNSZoneStatus is the status of a record within a specific zone. Type object Property Type Description conditions array conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. conditions[] object DNSZoneCondition is just the standard condition fields. dnsZone object dnsZone is the zone where the record is published. 11.1.5. .status.zones[].conditions Description conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. Type array 11.1.6. .status.zones[].conditions[] Description DNSZoneCondition is just the standard condition fields. Type object Required status type Property Type Description lastTransitionTime string message string reason string status string type string 11.1.7. .status.zones[].dnsZone Description dnsZone is the zone where the record is published. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 11.2. API endpoints The following API endpoints are available: /apis/ingress.operator.openshift.io/v1/dnsrecords GET : list objects of kind DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords DELETE : delete collection of DNSRecord GET : list objects of kind DNSRecord POST : create a DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} DELETE : delete a DNSRecord GET : read the specified DNSRecord PATCH : partially update the specified DNSRecord PUT : replace the specified DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status GET : read status of the specified DNSRecord PATCH : partially update status of the specified DNSRecord PUT : replace status of the specified DNSRecord 11.2.1. /apis/ingress.operator.openshift.io/v1/dnsrecords HTTP method GET Description list objects of kind DNSRecord Table 11.1. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty 11.2.2. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords HTTP method DELETE Description delete collection of DNSRecord Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNSRecord Table 11.3. 
HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty HTTP method POST Description create a DNSRecord Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body DNSRecord schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 202 - Accepted DNSRecord schema 401 - Unauthorized Empty 11.2.3. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} Table 11.7. Global path parameters Parameter Type Description name string name of the DNSRecord HTTP method DELETE Description delete a DNSRecord Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNSRecord Table 11.10. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNSRecord Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNSRecord Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body DNSRecord schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty 11.2.4. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the DNSRecord HTTP method GET Description read status of the specified DNSRecord Table 11.17. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNSRecord Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNSRecord Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. Body parameters Parameter Type Description body DNSRecord schema Table 11.22. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/dnsrecord-ingress-operator-openshift-io-v1 |
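Because DNSRecord resources are managed by OpenShift operators rather than by cluster administrators, the typical interaction with them is read-only inspection. The following sketch assumes that the records created by the Ingress Operator live in the openshift-ingress-operator namespace; adjust the namespace if your records are managed elsewhere.

oc get dnsrecords -n openshift-ingress-operator
oc get dnsrecords -n openshift-ingress-operator -o yaml

The output contains the spec fields described above (dnsName, recordType, recordTTL, targets, dnsManagementPolicy) together with the per-zone publication status.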
6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest | 6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest You can use I/O scheduling on a Red Hat Enterprise Linux guest virtual machine, regardless of the hypervisor on which the guest is running. The following is a list of benefits and issues that should be considered: Red Hat Enterprise Linux guests often benefit greatly from using the noop scheduler. The scheduler merges small requests from the guest operating system into larger requests before sending the I/O to the hypervisor. This enables the hypervisor to process the I/O requests more efficiently, which can significantly improve the guest's I/O performance. Depending on the workload I/O and how storage devices are attached, schedulers like deadline can be more beneficial than noop . Red Hat recommends performance testing to verify which scheduler provides the greatest performance benefit. Guests that use storage accessed by iSCSI, SR-IOV, or physical device passthrough should not use the noop scheduler. These methods do not allow the host to optimize I/O requests to the underlying physical device. Note In virtualized environments, it is sometimes not beneficial to schedule I/O on both the host and guest layers. If multiple guests use storage on a file system or block device managed by the host operating system, the host may be able to schedule I/O more efficiently because it is aware of requests from all guests. In addition, the host knows the physical layout of storage, which may not map linearly to the guests' virtual storage. All scheduler tuning should be tested under normal operating conditions, as synthetic benchmarks typically do not accurately compare performance of systems using shared resources in virtual environments. 6.2.1. Configuring the I/O Scheduler for Red Hat Enterprise Linux 7 The default scheduler used on a Red Hat Enterprise Linux 7 system is deadline . However, on a Red Hat Enterprise Linux 7 guest machine, it may be beneficial to change the scheduler to noop , by doing the following: In the /etc/default/grub file, change the elevator=deadline string on the GRUB_CMDLINE_LINUX line to elevator=noop . If there is no elevator= string, add elevator=noop at the end of the line. The following shows the /etc/default/grub file after a successful change. Rebuild the /boot/grub2/grub.cfg file. On a BIOS-based machine: On a UEFI-based machine: | [
"cat /etc/default/grub [...] GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=vg00/lvroot rhgb quiet elevator=noop\" [...]",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-IO-Scheduler-Guest |
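To verify which scheduler is currently active, or to try noop temporarily before making the GRUB change persistent, the scheduler for a block device can be inspected and switched at runtime through sysfs. The device name (vda) is an example; substitute the disk used by your guest, and note that a runtime change does not survive a reboot.

cat /sys/block/vda/queue/scheduler
echo noop > /sys/block/vda/queue/scheduler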
Chapter 2. Installing a cluster on IBM Power | Chapter 2. Installing a cluster on IBM Power In OpenShift Container Platform version 4.16, you can install a cluster on IBM Power(R) infrastructure that you provision. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. 
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Power requirements You can install OpenShift Container Platform version 4.16 on the following IBM(R) hardware: IBM Power(R)9 or IBM Power(R)10 processor-based systems Note Support for RHCOS functionality for all IBM Power(R)8 models, IBM Power(R) AC922, IBM Power(R) IC922, and IBM Power(R) LC922 is deprecated in OpenShift Container Platform 4.16. Red Hat recommends that you use later hardware models. Hardware requirements Six logical partitions (LPARs) across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or Power10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. 
Recommended IBM Power system requirements Hardware requirements Six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or IBM Power(R)10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Virtualized by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. 
Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 
The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 
4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. 
Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic.
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
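Tip After the installation completes, you can use the key pair that you generate in the following procedure to log in to a cluster node as the core user. The command below is a minimal sketch that assumes the ~/.ssh/id_ed25519 key path and the example node name control-plane0.ocp4.example.com used elsewhere in this document; substitute your own values:
ssh -i ~/.ssh/id_ed25519 core@control-plane0.ocp4.example.com
If the private key identity is already loaded in your ssh-agent process, you can omit the -i option.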
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. 
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) 2.9.1. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Simultaneous multithreading (SMT) is not supported. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. 
Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. 
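Note The following check is a minimal sketch that assumes the <installation_directory> placeholder used throughout this document. After you create the Kubernetes manifests later in the installation, you can confirm the scheduler setting that the procedure below refers to by running:
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
For a three-node cluster, the output should include a line similar to mastersSchedulable: true , which allows application workloads to run on the control plane nodes. If the value is false , edit the file as described in the procedure.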
Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. 
spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. 
The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. 
Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 2.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 2.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. 
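Note Later in this procedure, you retrieve the RHCOS live ISO location from the stream metadata that the installation program prints. If the jq tool is available on your installation host, the following command is one possible alternative to the grep shown in the procedure for extracting only the ppc64le live ISO URL; this is a sketch, and the JSON path might differ between releases:
openshift-install coreos print-stream-json | jq -r '.architectures.ppc64le.artifacts.metal.formats.iso.disk.location'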
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. 
Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. 
However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.12.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none 2.12.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
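For example, the equivalent checks for the control plane and compute Ignition config files might look like the following; the server address placeholder matches the one used above:

curl -k http://<HTTP_server>/master.ign
curl -k http://<HTTP_server>/worker.ign

Each request should return the JSON content of the corresponding Ignition config file.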
Although it is possible to obtain the RHCOS kernel, initramfs, and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs, kernel, and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file.
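Before booting the machines, you can optionally confirm that the artifact URLs referenced in the menu entry are reachable from the installation network. A minimal check with curl, using the same placeholder names as above, might be:

curl -I http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
curl -I http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
curl -I http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img

A 200 OK response for each URL indicates that the files are being served correctly.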
You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform version 4.16, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides stronger resilience to hardware failure, which achieves higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file.
For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. 
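If the bootstrap process does not complete in the expected time, you can collect diagnostic logs with the installation program before investigating further. The following is a sketch for user-provisioned infrastructure; the node addresses shown are placeholders that you must replace with your own values:

./openshift-install gather bootstrap --dir <installation_directory> --bootstrap <bootstrap_address> --master <master_address>

The command produces an archive of bootstrap and control plane logs that you can review or attach to a support case.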
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.16.1.1. Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. To do this, run the oc edit configs.imageregistry/cluster command, then change the managementState: Removed line to managementState: Managed . 2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_power/installing-ibm-power |
Chapter 2. The 64k page size kernel | Chapter 2. The 64k page size kernel kernel-64k is an additional, optional 64-bit ARM architecture kernel package that supports 64k pages. This additional kernel exists alongside the RHEL 9 for ARM kernel, which supports 4k pages. Different workloads have different memory configuration requirements, and these requirements are addressed by the two kernel variants. RHEL 9 on 64-bit ARM hardware thus offers two MMU page sizes: the 4k pages kernel for efficient memory usage in smaller environments, and kernel-64k for workloads with large, contiguous memory working sets. The 4k pages kernel and kernel-64k do not differ in the user experience because the user space is the same. You can choose the variant that best addresses your situation. 4k pages kernel Use 4k pages for more efficient memory usage in smaller environments, such as those in Edge and lower-cost, small cloud instances. In these environments, increasing the physical system memory amounts is not practical due to space, power, and cost constraints. Also, not all 64-bit ARM architecture processors support a 64k page size. The 4k pages kernel supports graphical installation using Anaconda, system or cloud image-based installations, as well as advanced installations using Kickstart. kernel-64k The 64k page size kernel is a useful option for large datasets on ARM platforms. kernel-64k is suitable for memory-intensive workloads because it provides significant gains in overall system performance, namely in large database, HPC, and high network performance. You must choose the page size on 64-bit ARM architecture systems at the time of installation. You can install kernel-64k only by using Kickstart, by adding the kernel-64k package to the package list in the Kickstart file. Additional resources Installing Kernel-64k on ARM | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/what-is-kernel-64k_managing-monitoring-and-updating-the-kernel |
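A minimal sketch of the relevant Kickstart fragment follows. It assumes that adding the package to the %packages section is sufficient for your image; consult the linked installation procedure for any additional steps, such as excluding the default 4k kernel:

%packages
kernel-64k
%end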
Chapter 4. Serving and chatting with your new model | Chapter 4. Serving and chatting with your new model You must deploy the model to your machine by serving the model. This deploys the model and makes the model available for interacting and chatting. 4.1. Serving the new model To interact with your new model, you must activate the model in a machine through serving. The ilab model serve command starts a vLLM server that allows you to chat with the model. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You customized your taxonomy tree, ran synthetic data generation, trained, and evaluated your new model. You need root user access on your machine. Procedure You can serve the model by running the following command: USD ilab model serve --model-path <path-to-best-performed-checkpoint> where: <path-to-best-performed-checkpoint> Specify the full path to the checkpoint you built after training. Your new model is the best performed checkpoint with its file path displayed after training. Example command: USD ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/ Important Ensure you have a slash / at the end of your model path. Example output of the ilab model serve command USD ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server. 4.2. Chatting with the new model You can chat with your model that has been trained with your data. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You customized your taxonomy tree, ran synthetic data generation, trained, and evaluated your new model. You served your checkpoint model. You need root user access on your machine. Procedure Since you are serving the model in one terminal window, you must open a new terminal window to chat with the model. To chat with your new model, run the following command: USD ilab model chat --model <path-to-best-performed-checkpoint-file> where: <path-to-best-performed-checkpoint-file> Specify the new model checkpoint file you built after training. Your new model is the best performed checkpoint with its file path displayed after training. Example command: USD ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945 Example output of the InstructLab chatbot USD ilab model chat ╭────────────────────────────────────────── system ──────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) │ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default] Type exit to leave the chatbot. | [
"ilab model serve --model-path <path-to-best-performed-checkpoint>",
"ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/",
"ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.",
"ilab model chat --model <path-to-best-performed-checkpoint-file>",
"ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945",
"ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/generating_a_custom_llm_using_rhel_ai/serving_chatting_new_model |
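Because ilab model serve runs a vLLM server, you can also query the served model directly over HTTP while it is running. The following is a minimal sketch, not taken from the product documentation: it assumes the server exposes the OpenAI-compatible /v1/completions endpoint on the default 127.0.0.1:8000 address shown in the serve output, and that the model field matches the served checkpoint path; check http://127.0.0.1:8000/docs on your system for the exact request schema.
USD curl -s http://127.0.0.1:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "<path-to-best-performed-checkpoint>", "prompt": "What is InstructLab?", "max_tokens": 128}'
This can be useful for scripting quick checks against the served model; for interactive use, the ilab model chat workflow described above is the supported path.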
Chapter 33. InlineLogging schema reference | Chapter 33. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging . It must have the value inline for the type InlineLogging . Property Description type Must be inline . string loggers A Map from logger name to logger level. map | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-inlinelogging-reference |
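As an illustrative sketch of how InlineLogging is typically used, the following fragment of a Kafka custom resource sets the logging type to inline and supplies a loggers map for the Kafka component. The logger names shown here are assumptions for illustration only; replace them with the loggers that the component you are configuring actually exposes.
spec:
  kafka:
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "INFO"
        log4j.logger.kafka.network.RequestChannel$: "WARN"
The same type: inline and loggers pattern applies to the other specs listed above, such as KafkaConnectSpec or EntityTopicOperatorSpec, with component-specific logger names.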
Chapter 7. Security Profiles Operator | Chapter 7. Security Profiles Operator 7.1. Security Profiles Operator overview OpenShift Container Platform Security Profiles Operator (SPO) provides a way to define secure computing ( seccomp ) profiles and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace. For the latest updates, see the release notes . The SPO can distribute custom resources to each node while a reconciliation loop ensures that the profiles stay up-to-date. See Understanding the Security Profiles Operator . The SPO manages SELinux policies and seccomp profiles for namespaced workloads. For more information, see Enabling the Security Profiles Operator . You can create seccomp and SELinux profiles, bind policies to pods, record workloads, and synchronize all worker nodes in a namespace. Use Advanced Security Profiles Operator tasks to enable the log enricher, configure webhooks and metrics, or restrict profiles to a single namespace. Troubleshoot the Security Profiles Operator as needed, or engage Red Hat support . You can Uninstall the Security Profiles Operator by removing the profiles before removing the Operator. 7.2. Security Profiles Operator release notes The Security Profiles Operator provides a way to define secure computing ( seccomp ) and SELinux profiles as custom resources, synchronizing profiles to every node in a given namespace. These release notes track the development of the Security Profiles Operator in OpenShift Container Platform. For an overview of the Security Profiles Operator, see the Security Profiles Operator overview. 7.2.1. Security Profiles Operator 0.8.6 The following advisory is available for the Security Profiles Operator 0.8.6: RHBA-2024:10380 - OpenShift Security Profiles Operator update This update includes upgraded dependencies in underlying base images. 7.2.2. Security Profiles Operator 0.8.5 The following advisory is available for the Security Profiles Operator 0.8.5: RHBA-2024:5016 - OpenShift Security Profiles Operator bug fix update 7.2.2.1. Bug fixes When attempting to install the Security Profiles Operator from the web console, the option to enable Operator-recommended cluster monitoring was unavailable for the namespace. With this update, you can now enable Operator-recommended cluster monitoring in the namespace. ( OCPBUGS-37794 ) Previously, the Security Profiles Operator was intermittently not visible in the OperatorHub, which limited the ability to install the Operator from the web console. With this update, the Security Profiles Operator is present in the OperatorHub. 7.2.3. Security Profiles Operator 0.8.4 The following advisory is available for the Security Profiles Operator 0.8.4: RHBA-2024:4781 - OpenShift Security Profiles Operator bug fix update This update addresses CVEs in underlying dependencies. 7.2.3.1. New features and enhancements You can now specify a default security profile in the image attribute of a ProfileBinding object by setting a wildcard. For more information, see Binding workloads to profiles with ProfileBindings (SELinux) and Binding workloads to profiles with ProfileBindings (Seccomp) . 7.2.4. Security Profiles Operator 0.8.2 The following advisory is available for the Security Profiles Operator 0.8.2: RHBA-2023:5958 - OpenShift Security Profiles Operator bug fix update 7.2.4.1. Bug fixes Previously, SELinuxProfile objects did not inherit custom attributes from the same namespace.
With this update, the issue has now been resolved and SELinuxProfile object attributes are inherited from the same namespace as expected. ( OCPBUGS-17164 ) Previously, RawSELinuxProfiles would hang during the creation process and would not reach an Installed state. With this update, the issue has been resolved and RawSELinuxProfiles are created successfully. ( OCPBUGS-19744 ) Previously, patching enableLogEnricher to true would cause the seccompProfile log-enricher-trace pods to be stuck in a Pending state. With this update, log-enricher-trace pods reach an Installed state as expected. ( OCPBUGS-22182 ) Previously, the Security Profiles Operator generated high cardinality metrics, causing Prometheus pods to use high amounts of memory. With this update, the following metrics will no longer apply in the Security Profiles Operator namespace: rest_client_request_duration_seconds rest_client_request_size_bytes rest_client_response_size_bytes ( OCPBUGS-22406 ) 7.2.5. Security Profiles Operator 0.8.0 The following advisory is available for the Security Profiles Operator 0.8.0: RHBA-2023:4689 - OpenShift Security Profiles Operator bug fix update 7.2.5.1. Bug fixes Previously, while trying to install Security Profiles Operator in a disconnected cluster, the secure hashes provided were incorrect due to a SHA relabeling issue. With this update, the SHAs provided work consistently with disconnected environments. ( OCPBUGS-14404 ) 7.2.6. Security Profiles Operator 0.7.1 The following advisory is available for the Security Profiles Operator 0.7.1: RHSA-2023:2029 - OpenShift Security Profiles Operator bug fix update 7.2.6.1. New features and enhancements Security Profiles Operator (SPO) now automatically selects the appropriate selinuxd image for RHEL 8- and 9-based RHCOS systems. Important Users who mirror images for disconnected environments must mirror both selinuxd images provided by the Security Profiles Operator. You can now enable memory optimization inside the spod daemon. For more information, see Enabling memory optimization in the spod daemon . Note SPO memory optimization is not enabled by default. The daemon resource requirements are now configurable. For more information, see Customizing daemon resource requirements . The priority class name is now configurable in the spod configuration. For more information, see Setting a custom priority class name for the spod daemon pod . 7.2.6.2. Deprecated and removed features The default nginx-1.19.1 seccomp profile is now removed from the Security Profiles Operator deployment. 7.2.6.3. Bug fixes Previously, a Security Profiles Operator (SPO) SELinux policy did not inherit low-level policy definitions from the container template. If you selected another template, such as net_container, the policy would not work because it required low-level policy definitions that only existed in the container template. This issue occurred when the SPO SELinux policy attempted to translate SELinux policies from the SPO custom format to the Common Intermediate Language (CIL) format. With this update, the container template appends to any SELinux policies that require translation from SPO to CIL. Additionally, the SPO SELinux policy can inherit low-level policy definitions from any supported policy template. ( OCPBUGS-12879 ) Known issue When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed.
As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator . ( OCPBUGS-4687 ) 7.2.7. Security Profiles Operator 0.5.2 The following advisory is available for the Security Profiles Operator 0.5.2: RHBA-2023:0788 - OpenShift Security Profiles Operator bug fix update This update addresses a CVE in an underlying dependency. Known issue When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator . ( OCPBUGS-4687 ) 7.2.8. Security Profiles Operator 0.5.0 The following advisory is available for the Security Profiles Operator 0.5.0: RHBA-2022:8762 - OpenShift Security Profiles Operator bug fix update Known issue When uninstalling the Security Profiles Operator, the MutatingWebhookConfiguration object is not deleted and must be manually removed. As a workaround, delete the MutatingWebhookConfiguration object after uninstalling the Security Profiles Operator. These steps are defined in Uninstalling the Security Profiles Operator . ( OCPBUGS-4687 ) 7.3. Security Profiles Operator support 7.3.1. Security Profiles Operator lifecycle The Security Profiles Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 7.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 7.4. Understanding the Security Profiles Operator OpenShift Container Platform administrators can use the Security Profiles Operator to define increased security measures in clusters. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. 7.4.1. About Security Profiles Security profiles can increase security at the container level in your cluster. Seccomp security profiles list the syscalls a process can make. Permissions are broader than SELinux, enabling users to restrict operations system-wide, such as write . SELinux security profiles provide a label-based system that restricts the access and usage of processes, applications, or files in a system. All files in an environment have labels that define permissions. SELinux profiles can define access within a given structure, such as directories. 7.5. 
Enabling the Security Profiles Operator Before you can use the Security Profiles Operator, you must ensure the Operator is deployed in the cluster. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. Important The Security Profiles Operator only supports x86_64 architecture. 7.5.1. Installing the Security Profiles Operator Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Security Profiles Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-security-profiles namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Security Profiles Operator is installed in the openshift-security-profiles namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-security-profiles project that are reporting issues. 7.5.2. Installing the Security Profiles Operator using the CLI Prerequisites You must have admin privileges. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: "true" Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-security-profiles namespace, or the namespace where the Security Profiles Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the following CSV file: USD oc get csv -n openshift-security-profiles Verify that the Security Profiles Operator is operational by running the following command: USD oc get deploy -n openshift-security-profiles 7.5.3. Configuring logging verbosity The Security Profiles Operator supports the default logging verbosity of 0 and an enhanced verbosity of 1 . 
Procedure To enable enhanced logging verbosity, patch the spod configuration and adjust the value by running the following command: USD oc -n openshift-security-profiles patch spod \ spod --type=merge -p '{"spec":{"verbosity":1}}' Example output securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched 7.6. Managing seccomp profiles Create and manage seccomp profiles and bind them to workloads. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. 7.6.1. Creating seccomp profiles Use the SeccompProfile object to create profiles. SeccompProfile objects can restrict syscalls within a container, limiting the access of your application. Procedure Create a project by running the following command: USD oc new-project my-namespace Create the SeccompProfile object: apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG The seccomp profile will be saved in /var/lib/kubelet/seccomp/operator/<namespace>/<name>.json . An init container creates the root directory of the Security Profiles Operator to run the Operator without root group or user ID privileges. A symbolic link is created from the rootless profile storage /var/lib/openshift-security-profiles to the default seccomp root path inside of the kubelet root /var/lib/kubelet/seccomp/operator . 7.6.2. Applying seccomp profiles to a pod Create a pod to apply one of the created profiles. Procedure Create a pod object that defines a securityContext : apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 View the profile path of the seccompProfile.localhostProfile attribute by running the following command: USD oc -n my-namespace get seccompprofile profile1 --output wide Example output NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json View the path to the localhost profile by running the following command: USD oc get sp profile1 --output=jsonpath='{.status.localhostProfile}' Example output operator/my-namespace/profile1.json Apply the localhostProfile output to the patch file: spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json Apply the profile to any other workload, such as a Deployment object, by running the following command: USD oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge Example output deployment.apps/myapp patched Verification Confirm the profile was applied correctly by running the following command: USD oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq . Example output { "seccompProfile": { "localhostProfile": "operator/my-namespace/profile1.json", "type": "localhost" } } 7.6.2.1. Binding workloads to profiles with ProfileBindings You can use the ProfileBinding resource to bind a security profile to the SecurityContext of a container. 
Procedure To bind a pod that uses a quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 image to the example SeccompProfile profile, create a ProfileBinding object in the same namespace with the pod and the SeccompProfile objects: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3 1 The kind: variable refers to the kind of the profile. 2 The name: variable refers to the name of the profile. 3 You can enable a default security profile by using a wildcard in the image attribute: image: "*" Important Using the image: "*" wildcard attribute binds all new pods with a default security profile in a given namespace. Label the namespace with enable-binding=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-binding=true Define a pod named test-pod.yaml : apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 Create the pod: USD oc create -f test-pod.yaml Note If the pod already exists, you must re-create the pod for the binding to work properly. Verification Confirm the pod inherits the ProfileBinding by running the following command: USD oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}' Example output {"localhostProfile":"operator/my-namespace/profile.json","type":"Localhost"} 7.6.3. Recording profiles from workloads The Security Profiles Operator can record system calls with ProfileRecording objects, making it easier to create baseline profiles for applications. When using the log enricher for recording seccomp profiles, verify the log enricher feature is enabled. See Additional resources for more information. Note A container with privileged: true security context restraints prevents log-based recording. Privileged containers are not subject to seccomp policies, and log-based recording makes use of a special seccomp profile to record events. 
Procedure Create a project by running the following command: USD oc new-project my-namespace Label the namespace with enable-recording=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-recording=true Create a ProfileRecording object containing a recorder: logs variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app Create a workload to record: apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 Confirm the pod is in a Running state by entering the following command: USD oc -n my-namespace get pods Example output NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s Confirm the enricher indicates that it receives audit logs for those containers: USD oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher Example output I0523 14:19:08.747313 430694 enricher.go:445] log-enricher "msg"="audit" "container"="redis" "executable"="/usr/local/bin/redis-server" "namespace"="my-namespace" "node"="xiyuan-23-5g2q9-worker-eastus2-6rpgf" "pid"=656802 "pod"="my-pod" "syscallID"=0 "syscallName"="read" "timestamp"="1684851548.745:207179" "type"="seccomp" Verification Remove the pod: USD oc -n my-namespace delete pod my-pod Confirm the Security Profiles Operator reconciles the two seccomp profiles: USD oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for seccompprofile NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s 7.6.3.1. Merging per-container profile instances By default, each container instance records into a separate profile. The Security Profiles Operator can merge the per-container profiles into a single profile. Merging profiles is useful when deploying applications using ReplicaSet or Deployment objects. Procedure Edit a ProfileRecording object to include a mergeStrategy: containers variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation.
name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record Label the namespace by running the following command: USD oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true Create the workload with the following YAML: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 To record the individual profiles, delete the deployment by running the following command: USD oc delete deployment nginx-deploy -n my-namespace To merge the profiles, delete the profile recording by running the following command: USD oc delete profilerecording test-recording -n my-namespace To start the merge operation and generate the results profile, run the following command: USD oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for seccompprofiles NAME STATUS AGE test-recording-nginx-record Installed 55s To view the permissions used by any of the containers, run the following command: USD oc get seccompprofiles test-recording-nginx-record -o yaml Additional resources Managing security context constraints Managing SCCs in OpenShift Using the log enricher About security profiles 7.7. Managing SELinux profiles Create and manage SELinux profiles and bind them to workloads. Important The Security Profiles Operator supports only Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported. 7.7.1. Creating SELinux profiles Use the SelinuxProfile object to create profiles. The SelinuxProfile object has several features that allow for better security hardening and readability: Restricts the profiles that can be inherited from to the current namespace or to a system-wide profile. Because there are typically many profiles installed on the system, but only a subset should be used by cluster workloads, the inheritable system profiles are listed in the spod instance in spec.selinuxOptions.allowedSystemProfiles (see the example patch after this list). Performs basic validation of the permissions, classes, and labels. Adds a new keyword @self that describes the process using the policy. This allows reusing a policy between workloads and namespaces easily, as the usage of the policy is based on the name and namespace. Adds features for better security hardening and readability compared to writing a profile directly in the SELinux CIL language.
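For example, if a profile in your namespace needs to inherit from a system profile other than container, one possible approach is to add that profile to the allowed list in the spod configuration. This is a sketch based on the spec.selinuxOptions.allowedSystemProfiles field described above and the oc patch pattern used elsewhere in this chapter; net_container is used purely as an illustration, and the corresponding policy must already be available on the node.
USD oc -n openshift-security-profiles patch spod spod --type=merge \
    -p '{"spec":{"selinuxOptions":{"allowedSystemProfiles":["container","net_container"]}}}'
After the patch is applied, a SelinuxProfile object in the namespace can list the additional profile in its inherit section.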
Procedure Create a project by running the following command: USD oc new-project nginx-deploy Create a policy that can be used with a non-privileged workload by creating the following SelinuxProfile object: apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container Wait for selinuxd to install the policy by running the following command: USD oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure Example output selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met The policies are placed into an emptyDir in the container owned by the Security Profiles Operator. The policies are saved in Common Intermediate Language (CIL) format in /etc/selinux.d/<name>_<namespace>.cil . Access the pod by running the following command: USD oc -n openshift-security-profiles rsh -c selinuxd ds/spod Verification View the file contents with cat by running the following command: USD cat /etc/selinux.d/nginx-secure_nginx-deploy.cil Example output (block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) ) Verify that a policy has been installed by running the following command: USD semodule -l | grep nginx-secure Example output nginx-secure_nginx-deploy 7.7.2. Applying SELinux profiles to a pod Create a pod to apply one of the created profiles. For SELinux profiles, the namespace must be labelled to allow privileged workloads. Procedure Apply the scc.podSecurityLabelSync=false label to the nginx-deploy namespace by running the following command: USD oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false Apply the privileged label to the nginx-deploy namespace by running the following command: USD oc label ns nginx-deploy --overwrite=true pod-security.kubernetes.io/enforce=privileged Obtain the SELinux profile usage string by running the following command: USD oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}' Example output nginx-secure_nginx-deploy.process Apply the output string in the workload manifest in the .spec.containers[].securityContext.seLinuxOptions attribute: apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process Important The SELinux type must exist before creating the workload. 7.7.2.1. Applying SELinux log policies To log policy violations or AVC denials, set the SElinuxProfile profile to permissive . Important This procedure defines logging policies. It does not set enforcement policies. Procedure Add permissive: true to an SElinuxProfile : apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true 7.7.2.2. Binding workloads to profiles with ProfileBindings You can use the ProfileBinding resource to bind a security profile to the SecurityContext of a container. 
Procedure To bind a pod that uses a quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 image to the example SelinuxProfile profile, create a ProfileBinding object in the same namespace with the pod and the SelinuxProfile objects: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3 1 The kind: variable refers to the kind of the profile. 2 The name: variable refers to the name of the profile. 3 You can enable a default security profile by using a wildcard in the image attribute: image: "*" Important Using the image: "*" wildcard attribute binds all new pods with a default security profile in a given namespace. Label the namespace with enable-binding=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-binding=true Define a pod named test-pod.yaml : apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 Create the pod: USD oc create -f test-pod.yaml Note If the pod already exists, you must re-create the pod for the binding to work properly. Verification Confirm the pod inherits the ProfileBinding by running the following command: USD oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}' Example output profile_nginx-binding.process 7.7.2.3. Replicating controllers and SecurityContextConstraints When you deploy SELinux policies for replicating controllers, such as deployments or daemon sets, note that the Pod objects spawned by the controllers are not running with the identity of the user who creates the workload. Unless a ServiceAccount is selected, the pods might revert to using a restricted SecurityContextConstraints (SCC) which does not allow use of custom security policies. Procedure Create a project by running the following command: USD oc new-project nginx-secure Create the following RoleBinding object to allow SELinux policies to be used in the nginx-secure namespace: kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io Create the Role object: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use Create the ServiceAccount object: apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure Create the Deployment object: apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 1 The .seLinuxOptions.type must exist before the Deployment is created. Note The SELinux type is not specified in the workload and is handled by the SCC.
When the pods are created by the deployment and the ReplicaSet , the pods will run with the appropriate profile. Ensure that your SCC is usable by only the correct service account. Refer to Additional resources for more information. 7.7.3. Recording profiles from workloads The Security Profiles Operator can record system calls with ProfileRecording objects, making it easier to create baseline profiles for applications. When using the log enricher for recording SELinux profiles, verify the log enricher feature is enabled. See Additional resources for more information. Note A container with privileged: true security context restraints prevents log-based recording. Privileged containers are not subject to SELinux policies, and log-based recording makes use of a special SELinux profile to record events. Procedure Create a project by running the following command: USD oc new-project my-namespace Label the namespace with enable-recording=true by running the following command: USD oc label ns my-namespace spo.x-k8s.io/enable-recording=true Create a ProfileRecording object containing a recorder: logs variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app Create a workload to record: apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 Confirm the pod is in a Running state by entering the following command: USD oc -n my-namespace get pods Example output NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s Confirm the enricher indicates that it receives audit logs for those containers: USD oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher Example output I0517 13:55:36.383187 348295 enricher.go:376] log-enricher "msg"="audit" "container"="redis" "namespace"="my-namespace" "node"="ip-10-0-189-53.us-east-2.compute.internal" "perm"="name_bind" "pod"="my-pod" "profile"="test-recording_redis_6kmrb_1684331729" "scontext"="system_u:system_r:selinuxrecording.process:s0:c4,c27" "tclass"="tcp_socket" "tcontext"="system_u:object_r:redis_port_t:s0" "timestamp"="1684331735.105:273965" "type"="selinux" Verification Remove the pod: USD oc -n my-namespace delete pod my-pod Confirm the Security Profiles Operator reconciles the two SELinux profiles: USD oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for selinuxprofile NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed 7.7.3.1. Merging per-container profile instances By default, each container instance records into a separate profile. The Security Profiles Operator can merge the per-container profiles into a single profile. Merging profiles is useful when deploying applications using ReplicaSet or Deployment objects. Procedure Edit a ProfileRecording object to include a mergeStrategy: containers variable: apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation.
name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record Label the namespace by running the following command: USD oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true Create the workload with the following YAML: apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 To record the individual profiles, delete the deployment by running the following command: USD oc delete deployment nginx-deploy -n my-namespace To merge the profiles, delete the profile recording by running the following command: USD oc delete profilerecording test-recording -n my-namespace To start the merge operation and generate the results profile, run the following command: USD oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace Example output for selinuxprofiles NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed To view the permissions used by any of the containers, run the following command: USD oc get selinuxprofiles test-recording-nginx-record -o yaml 7.7.3.2. About seLinuxContext: RunAsAny Recording of SELinux policies is implemented with a webhook that injects a special SELinux type to the pods being recorded. The SELinux type makes the pod run in permissive mode, logging all the AVC denials into audit.log . By default, a workload is not allowed to run with a custom SELinux policy, but uses an auto-generated type. To record a workload, the workload must use a service account that has permissions to use an SCC that allows the webhook to inject the permissive SELinux type. The privileged SCC contains seLinuxContext: RunAsAny . In addition, the namespace must be labeled with pod-security.kubernetes.io/enforce: privileged if your cluster enables the Pod Security Admission because only the privileged Pod Security Standard allows using a custom SELinux policy. Additional resources Managing security context constraints Managing SCCs in OpenShift Using the log enricher About security profiles 7.8. Advanced Security Profiles Operator tasks Use advanced tasks to enable metrics, configure webhooks, or restrict syscalls. 7.8.1. Restrict the allowed syscalls in seccomp profiles The Security Profiles Operator does not restrict syscalls in seccomp profiles by default. You can define the list of allowed syscalls in the spod configuration. Procedure To define the list of allowedSyscalls , adjust the spec parameter by running the following command: USD oc -n openshift-security-profiles patch spod spod --type merge \ -p '{"spec":{"allowedSyscalls": ["exit", "exit_group", "futex", "nanosleep"]}}' Important The Operator will install only the seccomp profiles, which have a subset of syscalls defined into the allowed list. All profiles not complying with this ruleset are rejected. When the list of allowed syscalls is modified in the spod configuration, the Operator will identify the already installed profiles which are non-compliant and remove them automatically. 7.8.2. 
Base syscalls for a container runtime You can use the baseProfileName attribute to establish the minimum required syscalls for a given runtime to start a container. Procedure Edit the SeccompProfile kind object and add baseProfileName: runc-v1.0.0 to the spec field: apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group 7.8.3. Enabling memory optimization in the spod daemon The controller running inside the spod daemon process watches all pods available in the cluster when profile recording is enabled. This can lead to very high memory usage in large clusters, resulting in the spod daemon running out of memory or crashing. To prevent crashes, the spod daemon can be configured to only load the pods labeled for profile recording into the cache memory. Note SPO memory optimization is not enabled by default. Procedure Enable memory optimization by running the following command: USD oc -n openshift-security-profiles patch spod spod --type=merge -p '{"spec":{"enableMemoryOptimization":true}}' To record a security profile for a pod, the pod must be labeled with spo.x-k8s.io/enable-recording: "true" : apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: "true" 7.8.4. Customizing daemon resource requirements The default resource requirements of the daemon container can be adjusted by using the field daemonResourceRequirements from the spod configuration. Procedure To specify the memory and CPU requests and limits of the daemon container, run the following command: USD oc -n openshift-security-profiles patch spod spod --type merge -p \ '{"spec":{"daemonResourceRequirements": { \ "requests": {"memory": "256Mi", "cpu": "250m"}, \ "limits": {"memory": "512Mi", "cpu": "500m"}}}}' 7.8.5. Setting a custom priority class name for the spod daemon pod The default priority class name of the spod daemon pod is set to system-node-critical . A custom priority class name can be configured in the spod configuration by setting a value in the priorityClassName field. Procedure Configure the priority class name by running the following command: USD oc -n openshift-security-profiles patch spod spod --type=merge -p '{"spec":{"priorityClassName":"my-priority-class"}}' Example output securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched 7.8.6. Using metrics The openshift-security-profiles namespace provides metrics endpoints, which are secured by the kube-rbac-proxy container. All metrics are exposed by the metrics service within the openshift-security-profiles namespace. The Security Profiles Operator includes a cluster role and corresponding binding spo-metrics-client to retrieve the metrics from within the cluster.
There are two metrics paths available: metrics.openshift-security-profiles/metrics : for controller runtime metrics metrics.openshift-security-profiles/metrics-spod : for the Operator daemon metrics Procedure To view the status of the metrics service, run the following command: USD oc get svc/metrics -n openshift-security-profiles Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s To retrieve the metrics, query the service endpoint using the default ServiceAccount token in the openshift-security-profiles namespace by running the following command: USD oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest \ -n openshift-security-profiles metrics-test -- bash -c \ 'curl -ks -H "Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://metrics.openshift-security-profiles/metrics-spod' Example output # HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. # TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation="delete"} 1 security_profiles_operator_seccomp_profile_total{operation="update"} 2 To retrieve metrics from a different namespace, link the ServiceAccount to the spo-metrics-client ClusterRoleBinding by running the following command: USD oc get clusterrolebinding spo-metrics-client -o wide Example output NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default 7.8.6.1. controller-runtime metrics The controller-runtime metrics and the DaemonSet endpoint metrics-spod provide a set of default metrics. Additional metrics are provided by the daemon, which are always prefixed with security_profiles_operator_ . Table 7.1. Available controller-runtime metrics Metric key Possible labels Type Purpose seccomp_profile_total operation={delete,update} Counter Amount of seccomp profile operations. seccomp_profile_audit_total node , namespace , pod , container , executable , syscall Counter Amount of seccomp profile audit operations. Requires the log enricher to be enabled. seccomp_profile_bpf_total node , mount_namespace , profile Counter Amount of seccomp profile bpf operations. Requires the bpf recorder to be enabled. seccomp_profile_error_total reason={ SeccompNotSupportedOnNode, InvalidSeccompProfile, CannotSaveSeccompProfile, CannotRemoveSeccompProfile, CannotUpdateSeccompProfile, CannotUpdateNodeStatus } Counter Amount of seccomp profile errors. selinux_profile_total operation={delete,update} Counter Amount of SELinux profile operations. selinux_profile_audit_total node , namespace , pod , container , executable , scontext , tcontext Counter Amount of SELinux profile audit operations. Requires the log enricher to be enabled. selinux_profile_error_total reason={ CannotSaveSelinuxPolicy, CannotUpdatePolicyStatus, CannotRemoveSelinuxPolicy, CannotContactSelinuxd, CannotWritePolicyFile, CannotGetPolicyStatus } Counter Amount of SELinux profile errors. 7.8.7. Using the log enricher The Security Profiles Operator contains a log enrichment feature, which is disabled by default. The log enricher container runs with privileged permissions to read the audit logs from the local node. The log enricher runs within the host PID namespace, hostPID . Important The log enricher must have permissions to read the host processes. 
Procedure Patch the spod configuration to enable the log enricher by running the following command: USD oc -n openshift-security-profiles patch spod spod \ --type=merge -p '{"spec":{"enableLogEnricher":true}}' Example output securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched Note The Security Profiles Operator will re-deploy the spod daemon set automatically. View the audit logs by running the following command: USD oc -n openshift-security-profiles logs -f ds/spod log-enricher Example output I0623 12:51:04.257814 1854764 deleg.go:130] setup "msg"="starting component: log-enricher" "buildDate"="1980-01-01T00:00:00Z" "compiler"="gc" "gitCommit"="unknown" "gitTreeState"="clean" "goVersion"="go1.16.2" "platform"="linux/amd64" "version"="0.4.0-dev" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher "msg"="Starting log-enricher on node: 127.0.0.1" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher "msg"="Connecting to local GRPC server" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher "msg"="Reading from file /var/log/audit/audit.log" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2} 7.8.7.1. Using the log enricher to trace an application You can use the Security Profiles Operator log enricher to trace an application. Procedure To trace an application, create a SeccompProfile logging profile: apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG Create a pod object to use the profile: apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 Examine the log enricher output by running the following command: USD oc -n openshift-security-profiles logs -f ds/spod log-enricher Example 7.1. Example output ... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=3 "syscallName"="close" "timestamp"="1624453150.205:1061" "type"="seccomp" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=157 "syscallName"="prctl" "timestamp"="1624453150.205:1062" "type"="seccomp" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=157 "syscallName"="prctl" "timestamp"="1624453150.205:1063" "type"="seccomp" ... 
I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=12 "syscallName"="brk" "timestamp"="1624453150.235:2873" "type"="seccomp" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=21 "syscallName"="access" "timestamp"="1624453150.235:2874" "type"="seccomp" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=257 "syscallName"="openat" "timestamp"="1624453150.235:2875" "type"="seccomp" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=5 "syscallName"="fstat" "timestamp"="1624453150.235:2876" "type"="seccomp" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=9 "syscallName"="mmap" "timestamp"="1624453150.235:2877" "type"="seccomp" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=3 "syscallName"="close" "timestamp"="1624453150.235:2878" "type"="seccomp" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher "msg"="audit" "container"="log-container" "executable"="/usr/sbin/nginx" "namespace"="default" "node"="127.0.0.1" "pid"=1905792 "pod"="log-pod" "syscallID"=257 "syscallName"="openat" "timestamp"="1624453150.235:2879" "type"="seccomp" ... 7.8.8. Configuring webhooks Profile binding and profile recording objects can use webhooks. Profile binding and recording object configurations are MutatingWebhookConfiguration CRs, managed by the Security Profiles Operator. To change the webhook configuration, the spod CR exposes a webhookOptions field that allows modification of the failurePolicy , namespaceSelector , and objectSelector variables. This allows you to set the webhooks to "soft-fail" or restrict them to a subset of a namespaces so that even if the webhooks failed, other namespaces or resources are not affected. Procedure Set the recording.spo.io webhook configuration to record only pods labeled with spo-record=true by creating the following patch file: spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - "true" Patch the spod/spod instance by running the following command: USD oc -n openshift-security-profiles patch spod \ spod -p USD(cat /tmp/spod-wh.patch) --type=merge To view the resulting MutatingWebhookConfiguration object, run the following command: USD oc get MutatingWebhookConfiguration \ spo-mutating-webhook-configuration -oyaml 7.9. Troubleshooting the Security Profiles Operator Troubleshoot the Security Profiles Operator to diagnose a problem or provide information in a bug report. 7.9.1. Inspecting seccomp profiles Corrupted seccomp profiles can disrupt your workloads. 
To prevent users from abusing the system, ensure that other workloads cannot map any part of the path /var/lib/kubelet/seccomp/operator . Procedure Confirm that the profile is reconciled by running the following command: USD oc -n openshift-security-profiles logs openshift-security-profiles-<id> Example 7.2. Example output I1019 19:34:14.942464 1 main.go:90] setup "msg"="starting openshift-security-profiles" "buildDate"="2020-10-19T19:31:24Z" "compiler"="gc" "gitCommit"="a3ef0e1ea6405092268c18f240b62015c247dd9d" "gitTreeState"="dirty" "goVersion"="go1.15.1" "platform"="linux/amd64" "version"="0.2.0-dev" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"=":8080" I1019 19:34:15.349076 1 main.go:126] setup "msg"="starting manager" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics" I1019 19:34:15.350201 1 controller.go:142] controller "msg"="Starting EventSource" "controller"="profile" "reconcilerGroup"="security-profiles-operator.x-k8s.io" "reconcilerKind"="SeccompProfile" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"defaultAction":""}}} I1019 19:34:15.450674 1 controller.go:149] controller "msg"="Starting Controller" "controller"="profile" "reconcilerGroup"="security-profiles-operator.x-k8s.io" "reconcilerKind"="SeccompProfile" I1019 19:34:15.450757 1 controller.go:176] controller "msg"="Starting workers" "controller"="profile" "reconcilerGroup"="security-profiles-operator.x-k8s.io" "reconcilerKind"="SeccompProfile" "worker count"=1 I1019 19:34:15.453102 1 profile.go:148] profile "msg"="Reconciled profile from SeccompProfile" "namespace"="openshift-security-profiles" "profile"="nginx-1.19.1" "name"="nginx-1.19.1" "resource version"="728" I1019 19:34:15.453618 1 profile.go:148] profile "msg"="Reconciled profile from SeccompProfile" "namespace"="openshift-security-profiles" "profile"="openshift-security-profiles" "name"="openshift-security-profiles" "resource version"="729" Confirm that the seccomp profiles are saved into the correct path by running the following command: USD oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> \ -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload Example output profile-block.json profile-complain.json 7.10. Uninstalling the Security Profiles Operator You can remove the Security Profiles Operator from your cluster by using the OpenShift Container Platform web console. 7.10.1. Uninstall the Security Profiles Operator using the web console To remove the Security Profiles Operator, you must first delete the seccomp and SELinux profiles. After the profiles are removed, you can then remove the Operator and its namespace by deleting the openshift-security-profiles project. Prerequisites Access to an OpenShift Container Platform cluster that uses an account with cluster-admin permissions. The Security Profiles Operator is installed. Procedure To remove the Security Profiles Operator by using the OpenShift Container Platform web console: Navigate to the Operators Installed Operators page. Delete all seccomp profiles, SELinux profiles, and webhook configurations. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Security Profiles Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for security profiles . Click the Options menu next to the openshift-security-profiles project, and select Delete Project .
Confirm the deletion by typing openshift-security-profiles in the dialog box, and click Delete . Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: \"true\"",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-security-profiles",
"oc get deploy -n openshift-security-profiles",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"verbosity\":1}}'",
"securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched",
"oc new-project my-namespace",
"apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG",
"apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21",
"oc -n my-namespace get seccompprofile profile1 --output wide",
"NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json",
"oc get sp profile1 --output=jsonpath='{.status.localhostProfile}'",
"operator/my-namespace/profile1.json",
"spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json",
"oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge",
"deployment.apps/myapp patched",
"oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq .",
"{ \"seccompProfile\": { \"localhostProfile\": \"operator/my-namespace/profile1.json\", \"type\": \"localhost\" } }",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3",
"oc label ns my-namespace spo.x-k8s.io/enable-binding=true",
"apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21",
"oc create -f test-pod.yaml",
"oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}'",
"{\"localhostProfile\":\"operator/my-namespace/profile.json\",\"type\":\"Localhost\"}",
"oc new-project my-namespace",
"oc label ns my-namespace spo.x-k8s.io/enable-recording=true",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app",
"apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1",
"oc -n my-namespace get pods",
"NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s",
"oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher",
"I0523 14:19:08.747313 430694 enricher.go:445] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"executable\"=\"/usr/local/bin/redis-server\" \"namespace\"=\"my-namespace\" \"node\"=\"xiyuan-23-5g2q9-worker-eastus2-6rpgf\" \"pid\"=656802 \"pod\"=\"my-pod\" \"syscallID\"=0 \"syscallName\"=\"read\" \"timestamp\"=\"1684851548.745:207179\" \"type\"=\"seccomp\"",
"oc -n my-namepace delete pod my-pod",
"oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record",
"oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080",
"oc delete deployment nginx-deploy -n my-namespace",
"oc delete profilerecording test-recording -n my-namespace",
"oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME STATUS AGE test-recording-nginx-record Installed 55s",
"oc get seccompprofiles test-recording-nginx-record -o yaml",
"oc new-project nginx-deploy",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container",
"oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure",
"selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met",
"oc -n openshift-security-profiles rsh -c selinuxd ds/spod",
"cat /etc/selinux.d/nginx-secure_nginx-deploy.cil",
"(block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) )",
"semodule -l | grep nginx-secure",
"nginx-secure_nginx-deploy",
"oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false",
"oc label ns nginx-deploy --overwrite=true pod-security.kubernetes.io/enforce=privileged",
"oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}'",
"nginx-secure_nginx-deploy.process",
"apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3",
"oc label ns my-namespace spo.x-k8s.io/enable-binding=true",
"apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21",
"oc create -f test-pod.yaml",
"oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}'",
"profile_nginx-binding.process",
"oc new-project nginx-secure",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use",
"apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure",
"apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure metadata: labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080",
"oc new-project my-namespace",
"oc label ns my-namespace spo.x-k8s.io/enable-recording=true",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app",
"apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1",
"oc -n my-namespace get pods",
"NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s",
"oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher",
"I0517 13:55:36.383187 348295 enricher.go:376] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"namespace\"=\"my-namespace\" \"node\"=\"ip-10-0-189-53.us-east-2.compute.internal\" \"perm\"=\"name_bind\" \"pod\"=\"my-pod\" \"profile\"=\"test-recording_redis_6kmrb_1684331729\" \"scontext\"=\"system_u:system_r:selinuxrecording.process:s0:c4,c27\" \"tclass\"=\"tcp_socket\" \"tcontext\"=\"system_u:object_r:redis_port_t:s0\" \"timestamp\"=\"1684331735.105:273965\" \"type\"=\"selinux\"",
"oc -n my-namepace delete pod my-pod",
"oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record",
"oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080",
"oc delete deployment nginx-deploy -n my-namespace",
"oc delete profilerecording test-recording -n my-namespace",
"oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed",
"oc get selinuxprofiles test-recording-nginx-record -o yaml",
"oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"allowedSyscalls\": [\"exit\", \"exit_group\", \"futex\", \"nanosleep\"]}}'",
"apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableMemoryOptimization\":true}}'",
"apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: \"true\"",
"oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"daemonResourceRequirements\": { \"requests\": {\"memory\": \"256Mi\", \"cpu\": \"250m\"}, \"limits\": {\"memory\": \"512Mi\", \"cpu\": \"500m\"}}}}'",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"priorityClassName\":\"my-priority-class\"}}'",
"securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched",
"oc get svc/metrics -n openshift-security-profiles",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s",
"oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest -n openshift-security-profiles metrics-test -- bash -c 'curl -ks -H \"Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://metrics.openshift-security-profiles/metrics-spod'",
"HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation=\"delete\"} 1 security_profiles_operator_seccomp_profile_total{operation=\"update\"} 2",
"oc get clusterrolebinding spo-metrics-client -o wide",
"NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableLogEnricher\":true}}'",
"securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched",
"oc -n openshift-security-profiles logs -f ds/spod log-enricher",
"I0623 12:51:04.257814 1854764 deleg.go:130] setup \"msg\"=\"starting component: log-enricher\" \"buildDate\"=\"1980-01-01T00:00:00Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"unknown\" \"gitTreeState\"=\"clean\" \"goVersion\"=\"go1.16.2\" \"platform\"=\"linux/amd64\" \"version\"=\"0.4.0-dev\" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher \"msg\"=\"Starting log-enricher on node: 127.0.0.1\" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher \"msg\"=\"Connecting to local GRPC server\" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher \"msg\"=\"Reading from file /var/log/audit/audit.log\" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2}",
"apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG",
"apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21",
"oc -n openshift-security-profiles logs -f ds/spod log-enricher",
"... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.205:1061\" \"type\"=\"seccomp\" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1062\" \"type\"=\"seccomp\" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1063\" \"type\"=\"seccomp\" ... I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=12 \"syscallName\"=\"brk\" \"timestamp\"=\"1624453150.235:2873\" \"type\"=\"seccomp\" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=21 \"syscallName\"=\"access\" \"timestamp\"=\"1624453150.235:2874\" \"type\"=\"seccomp\" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2875\" \"type\"=\"seccomp\" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=5 \"syscallName\"=\"fstat\" \"timestamp\"=\"1624453150.235:2876\" \"type\"=\"seccomp\" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=9 \"syscallName\"=\"mmap\" \"timestamp\"=\"1624453150.235:2877\" \"type\"=\"seccomp\" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.235:2878\" \"type\"=\"seccomp\" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2879\" \"type\"=\"seccomp\" ...",
"spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - \"true\"",
"oc -n openshift-security-profiles patch spod spod -p USD(cat /tmp/spod-wh.patch) --type=merge",
"oc get MutatingWebhookConfiguration spo-mutating-webhook-configuration -oyaml",
"oc -n openshift-security-profiles logs openshift-security-profiles-<id>",
"I1019 19:34:14.942464 1 main.go:90] setup \"msg\"=\"starting openshift-security-profiles\" \"buildDate\"=\"2020-10-19T19:31:24Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"a3ef0e1ea6405092268c18f240b62015c247dd9d\" \"gitTreeState\"=\"dirty\" \"goVersion\"=\"go1.15.1\" \"platform\"=\"linux/amd64\" \"version\"=\"0.2.0-dev\" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\" \"addr\"=\":8080\" I1019 19:34:15.349076 1 main.go:126] setup \"msg\"=\"starting manager\" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager \"msg\"=\"starting metrics server\" \"path\"=\"/metrics\" I1019 19:34:15.350201 1 controller.go:142] controller \"msg\"=\"Starting EventSource\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"defaultAction\":\"\"}}} I1019 19:34:15.450674 1 controller.go:149] controller \"msg\"=\"Starting Controller\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" I1019 19:34:15.450757 1 controller.go:176] controller \"msg\"=\"Starting workers\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"worker count\"=1 I1019 19:34:15.453102 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"nginx-1.19.1\" \"name\"=\"nginx-1.19.1\" \"resource version\"=\"728\" I1019 19:34:15.453618 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"openshift-security-profiles\" \"name\"=\"openshift-security-profiles\" \"resource version\"=\"729\"",
"oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload",
"profile-block.json profile-complain.json",
"oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_and_compliance/security-profiles-operator |
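After finishing the uninstall procedure for the Security Profiles Operator described above, a quick cleanup check can be reassuring. The following read-only commands are a hedged sketch and are not part of the documented procedure; they only use standard oc lookups. Run the first command before removing the Operator to confirm that no seccomp or SELinux profiles remain; after the project and the webhook configuration are deleted, the last two lookups should eventually report NotFound:
oc get seccompprofiles,selinuxprofiles --all-namespaces
oc get namespace openshift-security-profiles
oc get mutatingwebhookconfiguration spo-mutating-webhook-configuration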
Chapter 5. Deploying Red Hat Quay | Chapter 5. Deploying Red Hat Quay To deploy the Red Hat Quay service on the nodes in your cluster, you use the same Quay container you used to create the configuration file. The differences here are that you: Identify directories where the configuration files and data are stored Run the command with --sysctl net.core.somaxconn=4096 Don't use the config option or password For a basic setup, you can deploy on a single node; for high availability you probably want three or more nodes (for example, quay01, quay02, and quay03). Note The resulting Red Hat Quay service will listen on regular port 8080 and SSL port 8443. This is different from previous releases of Red Hat Quay, which listened on standard ports 80 and 443, respectively. In this document, we map 8080 and 8443 to standard ports 80 and 443 on the host, respectively. Throughout the rest of this document, we assume you have mapped the ports in this way. Here is what you do: Create directories : Create two directories to store configuration information and data on the host. For example: Copy config files : Copy the tarball ( quay-config.tar.gz ) to the configuration directory and unpack it. For example: Deploy Red Hat Quay : Having already authenticated to Quay.io (see Accessing Red Hat Quay ), run Red Hat Quay as a container, as follows: Note Add -e DEBUGLOG=true to the podman run command line for the Quay container to enable debug level logging. Add -e IGNORE_VALIDATION=true to bypass validation during the startup process. Open browser to UI : Once the Quay container has started, go to your web browser and open the URL to the node running the Quay container. Log into Red Hat Quay : Using the superuser account you created during configuration, log in and make sure Red Hat Quay is working properly. Add more Red Hat Quay nodes : At this point, you have the option of adding more nodes to this Red Hat Quay cluster by simply going to each node, then adding the tarball and starting the Quay container as just shown. Add optional features : To add more features to your Red Hat Quay cluster, such as Clair image scanning and Repository Mirroring, continue on to the next section. 5.1. Add Clair image scanning to Red Hat Quay Setting up and deploying Clair image scanning for your Red Hat Quay deployment is described in Clair Security Scanning . 5.2. Add repository mirroring to Red Hat Quay Enabling repository mirroring allows you to create container image repositories on your Red Hat Quay cluster that exactly match the content of a selected external registry, then sync the contents of those repositories on a regular schedule and on demand. To add the repository mirroring feature to your Red Hat Quay cluster: Run the repository mirroring worker. To do this, you start a quay pod with the repomirror option. Select "Enable Repository Mirroring" in the Red Hat Quay Setup tool. Log into your Red Hat Quay Web UI and begin creating mirrored repositories as described in Repository Mirroring in Red Hat Quay . The following procedure assumes you already have a running Red Hat Quay cluster on an OpenShift platform, with the Red Hat Quay Setup container running in your browser: Start the repo mirroring worker : Start the Quay container in repomirror mode. This example assumes you have configured TLS communications using a certificate that is currently stored in /root/ca.crt . If not, then remove the line that adds /root/ca.crt to the container: Log into config tool : Log into the Red Hat Quay Setup Web UI (config tool).
Enable repository mirroring : Scroll down to the Repository Mirroring section and select the Enable Repository Mirroring check box. Select HTTPS and cert verification : If you want to require HTTPS communications and verify certificates during mirroring, select this check box. Save configuration : Select the Save Configuration Changes button. Repository mirroring should now be enabled on your Red Hat Quay cluster. Refer to Repository Mirroring in Red Hat Quay for details on setting up your own mirrored container image repositories. A brief verification sketch follows the command listing below. | [
"mkdir -p /mnt/quay/config #optional: if you don't choose to install an Object Store mkdir -p /mnt/quay/storage",
"cp quay-config.tar.gz /mnt/quay/config/ tar xvf quay-config.tar.gz config.yaml ssl.cert ssl.key",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.10.9",
"sudo podman run -d --name mirroring-worker -v /mnt/quay/config:/conf/stack:Z -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt registry.redhat.io/quay/quay-rhel8:v3.10.9 repomirror"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploy_red_hat_quay_-_high_availability/deploying_red_hat_quay |
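To sanity-check the deployment and mirroring setup described in this chapter, a couple of read-only checks can help. This is a hedged sketch rather than part of the documented procedure, and quay01.example.com stands in for whichever hostname you mapped ports 80 and 443 to:
sudo podman ps
curl -k https://quay01.example.com/health/instance
The first command should list the Quay container and, if you started it, the mirroring-worker container as Up; the health endpoint should return a JSON document reporting healthy services.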
Chapter 14. Deploying machine health checks | Chapter 14. Deploying machine health checks You can configure and deploy a machine health check to automatically repair damaged machines in a machine pool. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 14.1. About machine health checks Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 14.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources About listing all the nodes in a cluster Short-circuiting machine health check remediation About the Control Plane Machine Set Operator 14.2. 
Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types other than bare metal resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 14.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. The maxUnhealthy field can be set as either an integer or percentage.
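For example, a health check that covers only the control plane and follows the maxUnhealthy guidance above might look like the following sketch. It is an illustration rather than text from this document; the name and the exact selector labels are assumptions that you must adapt to your cluster:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: control-plane-health   # assumed name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  unhealthyConditions:
  - type: "Ready"
    timeout: "300s"
    status: "False"
  maxUnhealthy: 1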
There are different remediation implementations depending on the maxUnhealthy value. 14.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 14.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 14.3. Creating a machine health check resource You can create a MachineHealthCheck resource for machine sets in your cluster. Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml You can configure and deploy a machine health check to detect and repair unhealthy bare metal nodes. 14.4. About power-based remediation of bare metal In a bare metal cluster, remediation of nodes is critical to ensuring the overall health of the cluster. Physically remediating a cluster can be challenging and any delay in putting the machine into a safe or an operational state increases the time the cluster remains in a degraded state, and the risk that subsequent failures might bring the cluster offline. Power-based remediation helps counter such challenges. Instead of reprovisioning the nodes, power-based remediation uses a power controller to power off an inoperable node. This type of remediation is also called power fencing. OpenShift Container Platform uses the MachineHealthCheck controller to detect faulty bare metal nodes. Power-based remediation is fast and reboots faulty nodes instead of removing them from the cluster. Power-based remediation provides the following capabilities: Allows the recovery of control plane nodes Reduces the risk of data loss in hyperconverged environments Reduces the downtime associated with recovering physical machines 14.4.1. MachineHealthChecks on bare metal Machine deletion on bare metal cluster triggers reprovisioning of a bare metal host. Usually bare metal reprovisioning is a lengthy process, during which the cluster is missing compute resources and applications might be interrupted. There are two ways to change the default remediation process from machine deletion to host power-cycle: Annotate the MachineHealthCheck resource with the machine.openshift.io/remediation-strategy: external-baremetal annotation. Create a Metal3RemediationTemplate resource, and refer to it in the spec.remediationTemplate of the MachineHealthCheck . After using one of these methods, unhealthy machines are power-cycled by using Baseboard Management Controller (BMC) credentials. 14.4.2. 
Understanding the annotation-based remediation process The remediation process operates as follows: The MachineHealthCheck (MHC) controller detects that a node is unhealthy. The MHC notifies the bare metal machine controller which requests to power-off the unhealthy node. After the power is off, the node is deleted, which allows the cluster to reschedule the affected workload on other nodes. The bare metal machine controller requests to power on the node. After the node is up, the node re-registers itself with the cluster, resulting in the creation of a new node. After the node is recreated, the bare metal machine controller restores the annotations and labels that existed on the unhealthy node before its deletion. Note If the power operations did not complete, the bare metal machine controller triggers the reprovisioning of the unhealthy node unless this is a control plane node or a node that was provisioned externally. 14.4.3. Understanding the metal3-based remediation process The remediation process operates as follows: The MachineHealthCheck (MHC) controller detects that a node is unhealthy. The MHC creates a metal3 remediation custom resource for the metal3 remediation controller, which requests to power-off the unhealthy node. After the power is off, the node is deleted, which allows the cluster to reschedule the affected workload on other nodes. The metal3 remediation controller requests to power on the node. After the node is up, the node re-registers itself with the cluster, resulting in the creation of a new node. After the node is recreated, the metal3 remediation controller restores the annotations and labels that existed on the unhealthy node before its deletion. Note If the power operations did not complete, the metal3 remediation controller triggers the reprovisioning of the unhealthy node unless this is a control plane node or a node that was provisioned externally. 14.4.4. Creating a MachineHealthCheck resource for bare metal Prerequisites The OpenShift Container Platform is installed using installer-provisioned infrastructure (IPI). Access to BMC credentials (or BMC access to each node). Network access to the BMC interface of the unhealthy node. Procedure Create a healthcheck.yaml file that contains the definition of your machine health check. Apply the healthcheck.yaml file to your cluster using the following command: USD oc apply -f healthcheck.yaml Sample MachineHealthCheck resource for bare metal, annotation-based remediation apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: "Ready" timeout: "300s" 6 status: "False" - type: "Ready" timeout: "300s" 7 status: "Unknown" maxUnhealthy: "40%" 8 nodeStartupTimeout: "10m" 9 1 Specify the name of the machine health check to deploy. 2 For bare metal clusters, you must include the machine.openshift.io/remediation-strategy: external-baremetal annotation in the annotations section to enable power-cycle remediation. With this remediation strategy, unhealthy hosts are rebooted instead of removed from the cluster. 3 4 Specify a label for the machine pool that you want to check. 
5 Specify the compute machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 6 7 Specify the timeout duration for the node condition. If the condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 8 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 9 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. Sample MachineHealthCheck resource for bare metal, metal3-based remediation apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: "Ready" timeout: "300s" Sample Metal3RemediationTemplate resource for bare metal, metal3-based remediation apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s Note The matchLabels are examples only; you must map your machine groups based on your specific needs. The annotations section does not apply to metal3-based remediation. Annotation-based remediation and metal3-based remediation are mutually exclusive. 14.4.5. Troubleshooting issues with power-based remediation To troubleshoot an issue with power-based remediation, verify the following: You have access to the BMC. BMC is connected to the control plane node that is responsible for running the remediation task. | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc apply -f healthcheck.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: \"Ready\" timeout: \"300s\"",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_management/deploying-machine-health-checks |
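After applying a MachineHealthCheck as described in this chapter, two read-only queries give a quick view of what the controller is watching and acting on. This is a hedged aside, not part of the documented procedure:
oc get machinehealthcheck -n openshift-machine-api
oc get machine -n openshift-machine-api -o wide
The first command lists the health checks along with their maxUnhealthy settings and the counts of expected and currently healthy machines; the second shows the machines in the targeted pool, which is where a remediated machine's replacement appears.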
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/making-open-source-more-inclusive |
Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations | Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on infrastructure that the installation program provisions on IBM Power Virtual Server. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5.
Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 4.6.1. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
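Before you edit the sample file that follows, two quick, optional checks can save a re-run of the configuration wizard. This is a hedged sketch, not part of the documented procedure, and the backup file name is an arbitrary choice:
[ -n "${IBMCLOUD_API_KEY}" ] && echo "IBMCLOUD_API_KEY is set" || echo "IBMCLOUD_API_KEY is not set"
cp install-config.yaml install-config.yaml.bak
The first line confirms that the API key variable exported earlier is visible to the installation program; the copy preserves install-config.yaml, which the installation process consumes, so that you can reuse it for additional clusters.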
apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: "ibmcloud-resource-group" 10 serviceInstanceGUID: "powervs-region-service-instance-guid" vpcRegion : vpc-region publish: External pullSecret: '{"auths": ...}' 11 sshKey: ssh-ed25519 AAAA... 12 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as: (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The name of an existing resource group. 11 Required. The installation program prompts you for this value. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. 
While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: $ ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: $ oc adm release extract \ --from=$RELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: $ ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. 
Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH . To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: $ oc whoami Example output system:admin Additional resources Accessing the web console 4.11.
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. Next steps Customize your cluster If necessary, you can opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 10 serviceInstanceGUID: \"powervs-region-service-instance-guid\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-customizations |
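The installation flow in sections 4.5 through 4.10 above spans several separate procedures. The following is a minimal end-to-end sketch that strings those documented commands together; the installation directory name (./powervs-cluster) and the API key file (~/.ibmcloud/apikey) are assumptions for illustration, not values defined by this document.

# Minimal sketch of the customized-install flow; directory and key-file names are assumptions.
export IBMCLOUD_API_KEY="$(cat ~/.ibmcloud/apikey)"                   # section 4.5: the installer reads this variable at startup
rm -rf ~/.powervs                                                     # avoid reusing a stale Power VS configuration
./openshift-install create install-config --dir ./powervs-cluster    # answer the interactive prompts (platform: powervs)
cp ./powervs-cluster/install-config.yaml ./install-config.yaml.bak   # back it up; the installer consumes the original
./openshift-install create cluster --dir ./powervs-cluster --log-level=info   # section 4.8: deploy
export KUBECONFIG=./powervs-cluster/auth/kubeconfig                   # section 4.10: log in with the generated kubeconfig
oc whoami                                                             # expected output: system:admin

Any customization of install-config.yaml, such as the proxy stanza in section 4.6.2 or the smtLevel values in section 4.6.1, belongs between the create install-config and create cluster steps.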
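Section 4.7 describes the manual-mode credential flow step by step; a condensed sketch follows. It reuses the hypothetical ./powervs-cluster directory from the sketch above, and the ./credreqs directory, cluster name, and resource group name are likewise placeholders rather than values from this document.

# Condensed sketch of the section 4.7 procedure; all paths and names are placeholders.
# Prerequisite: credentialsMode: Manual is already set in install-config.yaml.
./openshift-install create manifests --dir ./powervs-cluster
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=./install-config.yaml.bak \
  --to=./credreqs                               # point at your backup copy if create manifests consumed the original
ccoctl ibmcloud create-service-id \
  --credentials-requests-dir=./credreqs \
  --name=example-cluster-name \
  --output-dir=./powervs-cluster \
  --resource-group-name=ibmcloud-resource-group
ls ./powervs-cluster/manifests                  # verify that the credential secrets were generated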
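For section 4.9 on Linux, the downloaded archive can be unpacked and placed on the PATH in a couple of commands. The archive name below is the generic client archive typically offered on the downloads page and may differ for your architecture or version; the target directory is an assumption.

# Sketch only; the archive name and target directory are assumptions.
tar xvf openshift-client-linux.tar.gz        # unpacks oc (and kubectl)
sudo mv oc kubectl /usr/local/bin/           # any directory on your PATH works
oc version --client                          # confirm the client is available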
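If you configured the cluster-wide proxy described in section 4.6.2, the resulting Proxy object can be inspected after installation. These are standard oc queries against the cluster-scoped Proxy resource named cluster; the jsonpath expression is only an illustrative convenience.

# 'cluster' is the singleton Proxy object created by the installer.
oc get proxy cluster -o yaml                                   # full spec and derived status
oc get proxy cluster -o jsonpath='{.status.noProxy}{"\n"}'     # effective noProxy list, including machine and service CIDRs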
20.46. Configuring Memory Tuning | 20.46. Configuring Memory Tuning The virsh memtune virtual_machine --parameter size command is covered in the Virtualization Tuning and Optimization Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-configuring_memory_tuning |
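The entry above defers to the Virtualization Tuning and Optimization Guide for details. As a quick orientation, a typical virsh memtune invocation looks like the following; the domain name rhel7-guest and the limit values are illustrative assumptions, and values are interpreted in kibibytes by default.

# Illustrative only; "rhel7-guest" is a hypothetical domain name and the limits are examples.
virsh memtune rhel7-guest                                   # display current hard_limit, soft_limit, swap_hard_limit
virsh memtune rhel7-guest --hard-limit 2097152              # cap the guest at roughly 2 GiB
virsh memtune rhel7-guest --soft-limit 1048576 --config     # persist a soft limit in the guest configuration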
Chapter 6. Tuned [tuned.openshift.io/v1] | Chapter 6. Tuned [tuned.openshift.io/v1] Description Tuned is a collection of rules that allows cluster-wide deployment of node-level sysctls and more flexibility to add custom tuning specified by user needs. These rules are translated and passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The responsibility for applying the node-level tuning then lies with the containerized Tuned daemons. More info: https://github.com/openshift/cluster-node-tuning-operator Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status status object TunedStatus is the status for a Tuned resource. 6.1.1. .spec Description spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status Type object Property Type Description managementState string managementState indicates whether the registry instance represented by this config instance is under operator management or not. Valid values are Force, Managed, Unmanaged, and Removed. profile array Tuned profiles. profile[] object A Tuned profile. recommend array Selection logic for all Tuned profiles. recommend[] object Selection logic for a single Tuned profile. 6.1.2. .spec.profile Description Tuned profiles. Type array 6.1.3. .spec.profile[] Description A Tuned profile. Type object Required data name Property Type Description data string Specification of the Tuned profile to be consumed by the Tuned daemon. name string Name of the Tuned profile to be used in the recommend section. 6.1.4. .spec.recommend Description Selection logic for all Tuned profiles. Type array 6.1.5. .spec.recommend[] Description Selection logic for a single Tuned profile. Type object Required priority profile Property Type Description machineConfigLabels object (string) MachineConfigLabels specifies the labels for a MachineConfig. The MachineConfig is created automatically to apply additional host settings (e.g. kernel boot parameters) profile 'Profile' needs and can only be applied by creating a MachineConfig. This involves finding all MachineConfigPools with machineConfigSelector matching the MachineConfigLabels and setting the profile 'Profile' on all nodes that match the MachineConfigPools' nodeSelectors. match array Rules governing application of a Tuned profile connected by logical OR operator. match[] object Rules governing application of a Tuned profile. operand object Optional operand configuration. priority integer Tuned profile priority. Highest priority is 0. 
profile string Name of the Tuned profile to recommend. 6.1.6. .spec.recommend[].match Description Rules governing application of a Tuned profile connected by logical OR operator. Type array 6.1.7. .spec.recommend[].match[] Description Rules governing application of a Tuned profile. Type object Required label Property Type Description label string Node or Pod label name. match array (undefined) Additional rules governing application of the tuned profile connected by logical AND operator. type string Match type: [node/pod]. If omitted, "node" is assumed. value string Node or Pod label value. If omitted, the presence of label name is enough to match. 6.1.8. .spec.recommend[].operand Description Optional operand configuration. Type object Property Type Description debug boolean turn debugging on/off for the TuneD daemon: true/false (default is false) tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf 6.1.9. .spec.recommend[].operand.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 6.1.10. .status Description TunedStatus is the status for a Tuned resource. Type object 6.2. API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/tuneds GET : list objects of kind Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds DELETE : delete collection of Tuned GET : list objects of kind Tuned POST : create a Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} DELETE : delete a Tuned GET : read the specified Tuned PATCH : partially update the specified Tuned PUT : replace the specified Tuned 6.2.1. /apis/tuned.openshift.io/v1/tuneds Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. 
Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Tuned Table 6.2. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty 6.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Tuned Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Tuned Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty HTTP method POST Description create a Tuned Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body Tuned schema Table 6.11. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 202 - Accepted Tuned schema 401 - Unauthorized Empty 6.2.3. 
/apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the Tuned namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Tuned Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Tuned Table 6.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Tuned Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Tuned Table 6.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body Tuned schema Table 6.24. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/node_apis/tuned-tuned-openshift-io-v1 |
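A minimal Tuned custom resource built only from the spec fields documented above (a profile with name and data, and a recommend entry with a label match, priority, and profile) can be applied with oc. This is a sketch under assumptions: openshift-cluster-node-tuning-operator is assumed to be the operator's namespace, and the sysctl value, node label, and priority are illustrative, not recommendations.

oc apply -f - <<'EOF'
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: example-sysctl
  namespace: openshift-cluster-node-tuning-operator   # assumed operator namespace
spec:
  profile:
  - name: example-sysctl
    data: |
      [main]
      summary=Example profile that sets one sysctl
      include=openshift-node
      [sysctl]
      vm.dirty_ratio=10
  recommend:
  - match:
    - label: example.com/sysctl-profile   # illustrative node label
    priority: 20
    profile: example-sysctl
EOF

Nodes carrying the matched label (or pods, when type is set to pod) receive the profile; as the match table above notes, omitting value means the presence of the label name is enough to match.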
Chapter 5. OLMConfig [operators.coreos.com/v1] | Chapter 5. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object Required metadata 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OLMConfigSpec is the spec for an OLMConfig resource. status object OLMConfigStatus is the status for an OLMConfig resource. 5.1.1. .spec Description OLMConfigSpec is the spec for an OLMConfig resource. Type object Property Type Description features object Features contains the list of configurable OLM features. 5.1.2. .spec.features Description Features contains the list of configurable OLM features. Type object Property Type Description disableCopiedCSVs boolean DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. 5.1.3. .status Description OLMConfigStatus is the status for an OLMConfig resource. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 5.1.4. .status.conditions Description Type array 5.1.5. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. 
If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 5.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/olmconfigs DELETE : delete collection of OLMConfig GET : list objects of kind OLMConfig POST : create an OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name} DELETE : delete an OLMConfig GET : read the specified OLMConfig PATCH : partially update the specified OLMConfig PUT : replace the specified OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name}/status GET : read status of the specified OLMConfig PATCH : partially update status of the specified OLMConfig PUT : replace status of the specified OLMConfig 5.2.1. /apis/operators.coreos.com/v1/olmconfigs Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OLMConfig Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OLMConfig Table 5.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.5. HTTP responses HTTP code Reponse body 200 - OK OLMConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an OLMConfig Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body OLMConfig schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 202 - Accepted OLMConfig schema 401 - Unauthorized Empty 5.2.2. /apis/operators.coreos.com/v1/olmconfigs/{name} Table 5.9. Global path parameters Parameter Type Description name string name of the OLMConfig Table 5.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OLMConfig Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. 
Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.12. Body parameters Parameter Type Description body DeleteOptions schema Table 5.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OLMConfig Table 5.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.15. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OLMConfig Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OLMConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body OLMConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty 5.2.3. /apis/operators.coreos.com/v1/olmconfigs/{name}/status Table 5.22. Global path parameters Parameter Type Description name string name of the OLMConfig Table 5.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OLMConfig Table 5.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.25. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OLMConfig Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.27. Body parameters Parameter Type Description body Patch schema Table 5.28. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OLMConfig Table 5.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.30. Body parameters Parameter Type Description body OLMConfig schema Table 5.31. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty
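The endpoints above can be exercised with any Kubernetes API client. The following commands are a minimal sketch using the oc CLI; they assume that a cluster-scoped OLMConfig object named cluster exists and that your account is permitted to read and patch it. The annotation key used in the patch is purely illustrative.

$ oc get --raw /apis/operators.coreos.com/v1/olmconfigs

$ oc get --raw /apis/operators.coreos.com/v1/olmconfigs/cluster/status

$ oc patch olmconfig cluster --type merge -p '{"metadata":{"annotations":{"example.com/reviewed":"true"}}}'

The first two commands map to the list and status GET endpoints described in this section; the third issues a merge patch against the PATCH endpoint for the named object.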
Chapter 15. Installing a cluster on AWS with compute nodes on AWS Wavelength Zones | Chapter 15. Installing a cluster on AWS with compute nodes on AWS Wavelength Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Wavelength Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Wavelength Zone subnets. AWS Wavelength Zones is an infrastructure that AWS configured for mobile edge computing (MEC) applications. A Wavelength Zone embeds AWS compute and storage services within the 5G network of a communication service provider (CSP). By placing application servers in a Wavelength Zone, the application traffic from your 5G devices can stay in the 5G network. The application traffic of the device reaches the target server directly, making latency a non-issue. Additional resources See Wavelength Zones in the AWS documentation. 15.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Wavelength Zone locations to create the network resources in. You read AWS Wavelength features in the AWS documentation. You read the Quotas and considerations for Wavelength Zones in the AWS documentation. You added permissions for creating network resources that support AWS Wavelength Zones to the Identity and Access Management (IAM) user or role. For example: Example of an additional IAM policy that attached ec2:ModifyAvailabilityZoneGroup , ec2:CreateCarrierGateway , and ec2:DeleteCarrierGateway permissions to a user or role { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DeleteCarrierGateway", "ec2:CreateCarrierGateway" ], "Resource": "*" }, { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 15.2. About AWS Wavelength Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Wavelength Zones environment. 15.2.1. Cluster limitations in AWS Wavelength Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Wavelength Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . 
This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes. If you want the installation program to automatically create Wavelength Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. The following note details some of these limitations. For other limitations, ensure that you read the "Quotas and considerations for Wavelength Zones" document that Red Hat provides in the "Infrastructure prerequisites" section. Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Wavelength Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region. If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Wavelength Zones subnets in an OpenShift Container Platform cluster. 15.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Wavelength Zones locations. When deploying a cluster that uses Wavelength Zones, consider the following points: Amazon EC2 instances in the Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Wavelength Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Wavelength Zones and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How AWS Wavelength work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Wavelength Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Wavelength Zones locations is gp2 , which differs from the non-edge compute pool. 
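As a minimal illustration of the gp2-csi requirement noted above, a workload scheduled to a Wavelength Zone node can request storage through a persistent volume claim that names the gp2-csi storage class explicitly. The claim name and size in this sketch are illustrative only:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edge-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-csi
  resources:
    requests:
      storage: 10Gi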
The instance type used for each Wavelength Zones on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Wavelength Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=wavelength-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Wavelength Zones instances. Users can only run user workloads if they define tolerations in the pod specification. Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 15.3. Installation prerequisites Before you install a cluster in an AWS Wavelength Zones environment, you must configure your infrastructure so that it can adopt Wavelength Zone capabilities. 15.3.1. Opting in to an AWS Wavelength Zones If you plan to create subnets in AWS Wavelength Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Wavelength Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=wavelength-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Wavelength Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Wavelength Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Wavelength Zones where you want to create subnets. As an example for Wavelength Zones, specify us-east-1-wl1 to use the zone us-east-1-wl1-nyc-wlz-1 (US East New York). 15.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. 
With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.3.3. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 15.3.4. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 15.3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 15.3.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.4. Preparing for the installation Before you extend nodes to Wavelength Zones, you must prepare certain resources for the cluster installation environment. 15.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 15.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 15.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Wavelength Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 15.1. Machine types based on 64-bit x86 architecture for AWS Wavelength Zones r5.* t3.* Additional resources See AWS Wavelength features in the AWS documentation. 15.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. 
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Wavelength Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 15.5. Cluster installation options for an AWS Wavelength Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Wavelength Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Wavelength Zones subnets to the install-config.yaml file. steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Wavelength Zones environment: Installing a cluster quickly in AWS Wavelength Zones Modifying an installation configuration file to use AWS Wavelength Zones 15.6. Install a cluster quickly in AWS Wavelength Zones For OpenShift Container Platform 4.15, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Wavelength Zones locations. By using this installation route, the installation program automatically creates network resources and Wavelength Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 15.6.1. 
Modifying an installation configuration file to use AWS Wavelength Zones Modify an install-config.yaml file to include AWS Wavelength Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Wavelength Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Wavelength Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #... 1 The AWS Region name. 2 The list of Wavelength Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Wavelength Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Wavelength Zones steps Deploying the cluster 15.7. Installing a cluster in an existing VPC that has Wavelength Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Wavelength Zones. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 15.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Wavelength Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Wavelength Zones subnets not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. 
Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to the AWS Wavelength Zones on your AWS account. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 15.7.2. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 15.2. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. 
(Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC 
PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 15.7.3. Creating a VPC carrier gateway To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes. To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components: A carrier gateway that associates to the existing VPC. A carrier route table that lists route entries. A subnet that associates to the carrier route table. 
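The carrier gateway stack in the following procedure consumes the VpcId output of the VPC stack created in "Creating a VPC in AWS". As a sketch, you can capture that output with the AWS CLI; the stack name cluster-vpc is the example name used earlier and might differ in your environment:

$ export VpcId=$(aws cloudformation describe-stacks \
    --stack-name cluster-vpc \
    --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' \
    --output text)

You can export CLUSTER_REGION and ClusterName in the same way before running the create-stack command shown in the procedure.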
Carrier gateways exist for VPCs that only contain subnets in a Wavelength Zone. The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location: Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network. Performs Network Address Translation (NAT) functions, such as translating public IP addresses that are stored in a network border group from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic. Authorizes inbound traffic from a carrier network in a specific location. Authorizes outbound traffic to a carrier network and the internet. Note No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway. You can use the provided CloudFormation template to create a stack of the following AWS resources: One carrier gateway that associates to the VPC ID in the template. One public route table for the Wavelength Zone, named <ClusterName>-public-carrier . A default IPv4 route entry in the new route table that targets the carrier gateway. A VPC gateway endpoint for AWS Simple Storage Service (S3). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . Procedure Go to the section of the documentation named "CloudFormation template for the VPC Carrier Gateway", and then copy the syntax from the CloudFormation template for VPC Carrier Gateway template. Save the copied template syntax as a YAML file on your local system. This template describes the carrier gateway that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the carrier gateway: $ aws cloudformation create-stack --stack-name <stack_name> \ 1 --region ${CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="${VpcId}" \ 3 ParameterKey=ClusterName,ParameterValue="${ClusterName}" 4 1 <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS". 4 <ClusterName> is a custom value that is prefixed to resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f Verification Confirm that the CloudFormation template components exist by running the following command: $ aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create resources for your cluster.
PublicRouteTableId The ID of the Route Table in the Carrier infrastructure. Additional resources See Amazon S3 in the AWS documentation. 15.7.4. CloudFormation template for the VPC Carrier Gateway You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure. Example 15.3. CloudFormation template for VPC Carrier Gateway AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: "AWS::EC2::CarrierGateway" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "cagw"]] PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public-carrier"]] PublicRoute: Type: "AWS::EC2::Route" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 15.7.5. Creating subnets in Wavelength Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Wavelength Zones. Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Wavelength Zones group. Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. 
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets: $ aws cloudformation create-stack --stack-name <stack_name> \ 1 --region ${CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="${VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="${ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="${ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="${SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="${ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="${SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<wavelength_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 ${VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC. 4 ${CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 ${ZONE_NAME} is the name of the Wavelength Zone in which to create the subnets. 6 ${ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's carrier gateway CloudFormation stack. 7 ${SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 ${ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 ${SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: $ aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create resources for your cluster. PublicSubnetId The ID of the public subnet created by the CloudFormation stack. PrivateSubnetId The ID of the private subnet created by the CloudFormation stack. 15.7.6. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Wavelength Zones infrastructure. Example 15.4. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified.
PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] 15.7.7. Modifying an installation configuration file to use AWS Wavelength Zones subnets Modify your install-config.yaml file to include Wavelength Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Wavelength Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Wavelength Zones subnets in the platform.aws.subnets parameter. Example installation configuration file with Wavelength Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1 # ... 1 List of subnet IDs created in the zones: Availability and Wavelength Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 15.8. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Wavelength Zones infrastructure, you can configure the machine set manifests when installing a cluster. 
AWS Wavelength Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploys the compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have public IP addresses attached to them. You must create additional security groups, but ensure that you open the groups' rules to the internet only when you really need to. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift and manifests directory level. $ ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Wavelength Zones, so that the manifest gets deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIp parameter. Example machine set manifest configuration for installing a cluster quickly in Wavelength Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - ${INFRA_ID}-public-${ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Wavelength Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 15.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 15.10. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform cluster deployed successfully on AWS Wavelength Zones. 15.10.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: $ oc whoami Example output system:admin 15.10.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
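One of the prerequisites for the web console login is that all cluster Operators are available. If you want to confirm this from the command line first, you can use the standard oc client; this is an optional check, not a step in the documented procedure:
$ oc get clusteroperators
Every Operator in the output should report AVAILABLE as True, PROGRESSING as False, and DEGRADED as False.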
Additional resources For more information about accessing and understanding the OpenShift Container Platform web console, see Accessing the web console . 15.10.3. Verifying nodes that were created with edge compute pool After you install a cluster that uses AWS Wavelength Zones infrastructure, check the status of the machines that were created by the machine set manifests created during installation. To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command: $ oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: $ oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1-wbclh Running c5d.2xlarge us-east-1 us-east-1-wl1-nyc-wlz-1 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h To check nodes with edge roles, run the following command: $ oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f 15.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources For more information about the Telemetry service, see About remote health monitoring . Next steps Validating an installation . If necessary, you can opt out of remote health reporting .
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DeleteCarrierGateway\", \"ec2:CreateCarrierGateway\" ], \"Resource\": \"*\" }, { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }",
"aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=wavelength-zone --all-availability-zones",
"aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #",
"apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters \\// ParameterKey=VpcId,ParameterValue=\"USD{VpcId}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{ClusterName}\" 4",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: \"AWS::EC2::CarrierGateway\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"cagw\"]] PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public-carrier\"]] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1",
"./openshift-install create manifests --dir <installation_directory>",
"spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1-wbclh Running c5d.2xlarge us-east-1 us-east-1-wl1-nyc-wlz-1 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h",
"oc get nodes -l node-role.kubernetes.io/edge",
"NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/installing-aws-wavelength-zone |
Chapter 8. Domain Management | Chapter 8. Domain Management This section discusses concepts and configuration specific to the managed domain operating mode. For information on securing a managed domain, see the Securing a Managed Domain section of JBoss EAP How to Configure Server Security . 8.1. About Managed Domains The managed domain operating mode allows for the management of multiple JBoss EAP instances from a single control point. Centrally-managed JBoss EAP server collections are known as members of a domain. All JBoss EAP instances in a domain share a common management policy. A domain consists of one domain controller, one or more host controllers, and zero or more server groups per host. A domain controller is the central point from which the domain is controlled. It ensures that each server is configured according to the management policy of the domain. The domain controller is also a host controller. A host controller is a physical or virtual host that interacts with the domain controller to control the lifecycle of the application server instances running on its host and to assist the domain controller to manage them. Each host can contain multiple server groups. A server group is a set of server instances which have JBoss EAP installed on them and are managed and configured as one. The domain controller manages the configuration of and applications deployed onto server groups. Consequently, each server in a server group shares the same configuration and deployments. Host controllers are tied to specific physical, or virtual, hosts. You can run multiple host controllers on the same hardware if you use different configurations, ensuring their ports and other resources do not conflict. It is possible for a domain controller, a single host controller, and multiple servers to run within the same JBoss EAP instance, on the same physical system. 8.1.1. About the Domain Controller A domain controller is the JBoss EAP server instance that acts as a central management point for a domain. One host controller instance is configured to act as a domain controller. The primary responsibilities of the domain controller are: Maintain the domain's central management policy. Ensure all host controllers are aware of its current contents. Assist the host controllers in ensuring that all running JBoss EAP server instances are configured in accordance with this policy. By default, the central management policy is stored in the EAP_HOME /domain/configuration/domain.xml file. This file is required in this directory of the host controller that is set to run as the domain controller. The domain.xml file contains profile configurations available for use by the servers in the domain. A profile contains the detailed settings of the various subsystems available in that profile. The domain configuration also includes the definition of socket groups and the server group definitions. Note A JBoss EAP 7 domain controller can administer JBoss EAP 6 hosts and servers, if the hosts and servers are running JBoss EAP 6.2 or later. For more information, see Configure a JBoss EAP 7.x Domain Controller to Administer JBoss EAP 6 Instances . For more information, see the Start a Managed Domain and Domain Controller Configuration sections. 8.1.2. About Host Controllers The primary responsibility of a host controller is server management. It delegates domain management tasks and is responsible for starting and stopping the individual application server processes that run on its host. 
It interacts with the domain controller to help manage the communication between the servers and the domain controller. Multiple host controllers of a domain can interact with only a single domain controller. Hence, all of the host controllers and server instances running in a single managed domain share a single domain controller and must belong to the same domain. By default, each host controller reads its configuration from the EAP_HOME /domain/configuration/host.xml file located in the unzipped JBoss EAP installation file on its host's file system. The host.xml file contains the following configuration information that is specific to the particular host: The names of the server instances meant to run from this installation. Configurations specific to the local physical installation. For example, named interface definitions declared in domain.xml can be mapped to an actual machine-specific IP address in host.xml , and abstract path names in domain.xml can be mapped to actual file system paths in host.xml . Any of the following configurations: How the host controller contacts the domain controller to register itself and access the domain configuration. How to find and contact a remote domain controller. Whether the host controller is to act as the domain controller. For more information, see the Start a Managed Domain and Host Controller Configuration sections. 8.1.3. About Process Controllers A process controller is a small, lightweight process that is responsible for spawning the host controller process and monitoring its lifecycle. If the host controller crashes, the process controller will restart it. It also starts server processes as directed by the host controller; however, it will not automatically restart server processes that crash. The process controller logs to the EAP_HOME /domain/log/process-controller.log file. You can set JVM options for the process controller in the EAP_HOME /bin/domain.conf file using the PROCESS_CONTROLLER_JAVA_OPTS variable. 8.1.4. About Server Groups A server group is a collection of server instances that are managed and configured as one. In a managed domain, every application server instance belongs to a server group, even if it is the only member. The server instances in a group share the same profile configuration and deployed content. A domain controller and a host controller enforce the standard configuration on all server instances of every server group in its domain. A domain can consist of multiple server groups. Different server groups can be configured with different profiles and deployments. For example, a domain can be configured with different server tiers providing different services. Different server groups can also have the same profile and deployments. This can, for example, allow for rolling application upgrades where the application is upgraded on one server group and then updated on a second server group, avoiding a complete service outage. For more information, see the Configuring Server Groups section. 8.1.5. About Servers A server represents an application server instance. In a managed domain, all server instances are members of a server group. The host controller launches each server instance in its own JVM process. For more information, see the Configuring Servers section. 8.2. Navigating Domain Configurations JBoss EAP provides scalable management interfaces to support both small and large-scale managed domains.
Management Console The JBoss EAP management console provides a graphical view of your domain and allows you to easily manage hosts, servers, deployments, and profiles for your domain. Configuration From the Configuration tab, you can configure the subsystems for each profile used in your domain. Different server groups in your domain may use different profiles depending the capabilities needed. Once you select the desired profile, all available subsystems for that profile are listed. For more information on configuring profiles, see Managing JBoss EAP Profiles . Runtime From the Runtime tab, you can manage servers and server groups as well as host configuration. You can browse the domain by host or by server group. From Hosts , you can configure host properties and JVM settings as well as add and configure servers for that host. From Server Groups , you can add new server groups and configure properties and JVM settings as well as add and configure servers for that server group. You can perform operations such as starting, stopping, suspending, and reloading all servers in the selected server group. From either Hosts or Server Groups , you can add new servers and configure server properties and JVM settings. You can perform operations such as starting, stopping, suspending, and reloading the selected server. You can also view runtime information, such as JVM usage, server logs, and subsystem-specific information. From Topology , you can see an overview and view detailed information for the hosts, server groups, and servers in your domain. You can perform operations on each of them, such as reloading or suspending. Deployments From the Deployments tab, you can add and deploy deployments to server groups. You can view all deployments in the content repository or view deployments deployed to a particular server group. For more information on deploying applications using the management console, see Deploy an Application in a Managed Domain Management CLI The JBoss EAP management CLI provides a command-line interface to manage hosts, servers, deployments and profiles for your domain. Subsystem configuration can be accessed once the appropriate profile is selected. Note Instructions and examples throughout this guide may contain management CLI commands for subsystem configuration that apply when running as a standalone server, for example: To adjust these management CLI commands to be run in a managed domain, you must specify the appropriate profile to configure, for example: After specifying the appropriate host, you can configure host settings and perform operations on servers on that host. After specifying the appropriate host, you can configure servers for that host. After specifying the appropriate server group, you can configure server group settings and perform operations on all servers in the selected server group. You can deploy applications in a managed domain by using the deploy management CLI command and specifying the appropriate server groups. For instructions, see Deploy an Application in a Managed Domain . 8.3. Launching a Managed Domain 8.3.1. Start a Managed Domain Domain and host controllers can be started using the domain.sh or domain.bat script provided with JBoss EAP. For a complete listing of all available startup script arguments and their purposes, use the --help argument or see the Server runtime arguments and switches section. The domain controller must be started before any slave servers in any server groups in the domain. 
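For example, on a default ZIP installation the startup scripts are run from the EAP_HOME /bin directory. The following is a minimal sketch of how the scripts are invoked, not a complete startup procedure; the --host-config argument shown here selects an alternate host configuration file, such as the host-master.xml and host-slave.xml files described next:
$ EAP_HOME/bin/domain.sh --help
C:\> EAP_HOME\bin\domain.bat --help
$ EAP_HOME/bin/domain.sh --host-config=host-master.xml
Run with no arguments, the script starts using the default host.xml and domain.xml configuration files.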
Start the domain controller first, then start any other associated host controllers in the domain. Start the Domain Controller Start the domain controller with the host-master.xml configuration file, which is preconfigured for a dedicated domain controller. Depending on your domain setup, you will need to make additional configurations to allow host controllers to connect. Also see the following example domain setups: Set Up a Managed Domain on a Single Machine Set Up a Managed Domain on Two Machines Start a Host Controller Start the host controller with the host-slave.xml configuration file, which is preconfigured for a slave host controller. Depending on your domain setup, you will need to make additional configurations so that the host controller can connect to, and not conflict with, the domain controller. Also see the following example domain setups: Set Up a Managed Domain on a Single Machine Set Up a Managed Domain on Two Machines 8.3.2. Domain Controller Configuration You must configure one host in the domain as the domain controller. Important It is not supported to configure multiple domain or host controllers on the same machine when using the RPM installation method to install JBoss EAP. Configure a host as the domain controller by adding the <local/> element in the <domain-controller> declaration. The <domain-controller> should include no other content. <domain-controller> <local/> </domain-controller> The host acting as the domain controller must expose a management interface that is accessible to other hosts in the domain. The HTTP interface is the standard management interface. <management-interfaces> <http-interface security-realm="ManagementRealm" http-upgrade-enabled="true"> <socket interface="management" port="${jboss.management.http.port:9990}"/> </http-interface> </management-interfaces> The sample minimal domain controller configuration file, EAP_HOME /domain/configuration/host-master.xml , includes these configuration settings. 8.3.3. Host Controller Configuration A host controller must be configured to connect to the domain controller so that the host controller can register itself with the domain. Important It is not supported to configure multiple domain or host controllers on the same machine when using the RPM installation method to install JBoss EAP. Use the <domain-controller> element of the configuration to configure a connection to the domain controller. <domain-controller> <remote security-realm="ManagementRealm"> <discovery-options> <static-discovery name="primary" protocol="${jboss.domain.master.protocol:remote}" host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9990}"/> </discovery-options> </remote> </domain-controller> The sample minimal host controller configuration file, EAP_HOME /domain/configuration/host-slave.xml , includes the configuration settings to connect to a domain controller. The configuration assumes that you provide the jboss.domain.master.address property when starting the host controller. For more information on domain controller discovery, see the Domain Controller Discovery and Failover section. Depending on your domain setup, you may also need to provide credentials so that the host controller can be authenticated by the domain controller. See Set Up a Managed Domain on Two Machines for details on generating a management user with a secret value and updating the host controller configuration with that value. 8.3.4. Configure the Name of a Host Every host running in a managed domain must have a unique host name.
To ease administration and allow for the use of the same host configuration files on multiple hosts, the server uses the following precedence for determining the host name. If set, the host element name attribute in the host.xml configuration file. The value of the jboss.host.name system property. The value that follows the final period ( . ) character in the jboss.qualified.host.name system property, or the entire value if there is no final period ( . ) character. The value that follows the period ( . ) character in the HOSTNAME environment variable for POSIX-based operating systems, the COMPUTERNAME environment variable for Microsoft Windows, or the entire value if there is no final period ( . ) character. A host controller's name is configured in the host element at the top of the relevant host.xml configuration file, for example: <host xmlns="urn:jboss:domain:8.0" name="host1"> Use the following procedure to update the host name using the management CLI. Start the JBoss EAP host controller. Launch the management CLI, connecting to the domain controller. Use the following command to set a new host name. This modifies the host name attribute in the host-slave.xml file as follows: <host name=" NEW_HOST_NAME " xmlns="urn:jboss:domain:8.0"> Reload the host controller in order for the changes to take effect. If a host controller does not have a name set in the configuration file, you can also pass in the host name at runtime. 8.4. Managing Servers 8.4.1. Configure Server Groups The following is an example of a server group definition: <server-group name="main-server-group" profile="full"> <jvm name="default"> <heap size="64m" max-size="512m"/> </jvm> <socket-binding-group ref="full-sockets"/> <deployments> <deployment name="test-application.war" runtime-name="test-application.war"/> <deployment name="helloworld.war" runtime-name="helloworld.war" enabled="false"/> </deployments> </server-group> Server groups can be configured using the management CLI or from the management console Runtime tab. Add a Server Group The following management CLI command can be used to add a server group. Update a Server Group The following management CLI command can be used to update server group attributes. Remove a Server Group The following management CLI command can be used to remove a server group. Server Group Attributes A server group requires the following attributes: name : The server group name. profile : The server group profile name. socket-binding-group : The default socket binding group used for servers in the group. This can be overridden on a per-server basis. A server group includes the following optional attributes: management-subsystem-endpoint : Set to true to have servers belonging to the server group connect back to the host controller using the endpoint from their remoting subsystem. The remoting subsystem must be present for this to work. socket-binding-default-interface : The socket binding group default interface for this server. socket-binding-port-offset : The default offset to be added to the port values given by the socket binding group. deployments : The deployment content to be deployed on the servers in the group. jvm : The default JVM settings for all servers in the group. The host controller merges these settings with any other configuration provided in host.xml to derive the settings used to launch the server's JVM. deployment-overlays : Links between a defined deployment overlay and deployments in this server group. 
system-properties : The system properties to be set on servers in the group. 8.4.2. Configure Servers The default host.xml configuration file defines three servers: <servers> <server name="server-one" group="main-server-group"> </server> <server name="server-two" group="main-server-group" auto-start="true"> <socket-bindings port-offset="150"/> </server> <server name="server-three" group="other-server-group" auto-start="false"> <socket-bindings port-offset="250"/> </server> </servers> A server instance named server-one is associated with main-server-group and inherits the subsystem configuration and socket bindings specified by that server group. A server instance named server-two is also associated with main-server-group , but also defines a socket binding port-offset value, so as not to conflict with the port values used by server-one . A server instance named server-three is associated with other-server-group and uses that group's configurations. It also defines a port-offset value and sets auto-start to false so that this server does not start when the host controller starts. Servers can be configured using the management CLI or from the management console Runtime tab. Add a Server The following management CLI command can be used to add a server. Update a Server The following management CLI command can be used to update server attributes. Remove a Server The following management CLI command can be used to remove a server. Server Attributes A server requires the following attributes: name : The name of the server. group : The name of a server group from the domain model. A server includes the following optional attributes: auto-start : Whether or not this server should be started when the host controller starts. socket-binding-group : The socket binding group to which this server belongs. socket-binding-port-offset : An offset to be added to the port values given by the socket binding group for this server. update-auto-start-with-server-status : Update the auto-start attribute with the status of the server. interface : A list of fully-specified named network interfaces available for use on the server. jvm : The JVM settings for this server. If not declared, the settings are inherited from the parent server group or host. path : A list of named file system paths. system-property : A list of system properties to set on this server. 8.4.3. Start and Stop Servers You can perform operations on servers, such as starting, stopping, and reloading, from the management console by navigating to the Runtime tab and selecting the appropriate host or server group. See the below commands for performing these operations using the management CLI. Start Servers You can start a single server on a particular host. You can start all servers in a specified server group. Stop Servers You can stop a single server on a particular host. You can stop all servers in a specified server group. Reload Servers You can reload a single server on a particular host. You can reload all servers in a specified server group. Kill Servers You can kill all server processes in a specified server group. 8.5. Domain Controller Discovery and Failover When setting up a managed domain, each host controller must be configured with information needed to contact the domain controller. In JBoss EAP, each host controller can be configured with multiple options for finding the domain controller. Host controllers iterate through the list of options until one succeeds. 
A backup host controller can be promoted to domain controller if there is a problem with the primary domain controller. This allows host controllers to automatically fail over to the new domain controller once it has been promoted. 8.5.1. Configure Domain Discovery Options The following is an example of how to configure a host controller with multiple options for finding the domain controller. Example: A Host Controller with Multiple Domain Controller Options <domain-controller> <remote security-realm="ManagementRealm"> <discovery-options> <static-discovery name="primary" protocol="USD{jboss.domain.master.protocol:remote}" host="172.16.81.100" port="USD{jboss.domain.master.port:9990}"/> <static-discovery name="backup" protocol="USD{jboss.domain.master.protocol:remote}" host="172.16.81.101" port="USD{jboss.domain.master.port:9990}"/> </discovery-options> </remote> </domain-controller> A static discovery option includes the following required attributes: name The name for this domain controller discovery option. host The remote domain controller's host name. port The remote domain controller's port. In the example above, the first discovery option is the one expected to succeed. The second can be used in failover situations. 8.5.2. Start a Host Controller with a Cached Domain Configuration A host controller can be started without a connection to the domain controller by using the --cached-dc option; however, the host controller must have previously cached its domain configuration locally from the domain controller. Starting a host controller with this --cached-dc option will cache the host controller's domain configuration from the domain controller. This creates a domain.cached-remote.xml file in the EAP_HOME /domain/configuration/ directory that contains the information necessary for this host controller to temporarily manage its current servers without a domain controller connection. Note By default, using the --cached-dc option only caches configuration used by this host controller, which means that it cannot be promoted to domain controller for the entire domain. See Cache the Domain Configuration for information on caching the entire domain configuration to allow a host controller to act as the domain controller. If the domain controller is unavailable when starting this host controller with --cached-dc , the host controller will start using the cached configuration saved in the domain.cached-remote.xml file. Note that this file must exist or the host controller will fail to start. While in this state, the host controller cannot modify the domain configuration, but can launch servers and manage deployments. Once started with the cached configuration, the host controller will continue to attempt to reconnect to the domain controller. Once the domain controller becomes available, the host controller will automatically reconnect to it and synchronize the domain configuration. Note that some configuration changes may require you to reload the host controller to take effect. A warning will be logged on the host controller if this occurs. 8.5.3. Promote a Host Controller to Act as Domain Controller You can promote a host controller to act as the domain controller if a problem arises with the primary domain controller. The host controller must first cache the domain configuration locally from the domain controller before it can be promoted. Cache the Domain Configuration Use the --backup option for any host controller that you might want to become the domain controller. 
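For example, such a host controller can be started with the --backup option as follows. This is a minimal sketch that assumes the host-slave.xml host configuration file; substitute the host configuration file that your host controller actually uses.

EAP_HOME /bin/domain.sh --host-config=host-slave.xml --backup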
This creates a domain.cached-remote.xml file in the EAP_HOME /domain/configuration/ directory that contains a copy of the entire domain configuration. This configuration will be used if the host controller is reconfigured to act as the domain controller. Note The ignore-unused-configuration attribute is used to determine how much configuration to cache for a particular host. A value of true means that only the configuration relevant to this host controller is cached, which would not allow it to take over as domain controller. A value of false means that the entire domain configuration is cached. The --backup argument defaults this attribute to false to cache the entire domain. However, if you set this attribute in the host.xml file, that value is used. You can also use the --cached-dc option alone to create a copy of the domain configuration, but must set ignore-unused-configuration to false in the host.xml to cache the entire domain. For example: <domain-controller> <remote username="USDlocal" security-realm="ManagementRealm" ignore-unused-configuration="false"> <discovery-options> ... </discovery-options> </remote> </domain-controller> Promote a Host Controller to Be the Domain Controller Ensure the original domain controller is stopped. Use the management CLI to connect to the host controller that is to become the new domain controller. Execute the following command to configure the host controller to act as the new domain controller. Execute the following command to reload the host controller. This host controller will now act as the domain controller. 8.6. Managed Domain Setups 8.6.1. Set Up a Managed Domain on a Single Machine You can run multiple host controllers on a single machine by using the jboss.domain.base.dir property. Important It is not supported to configure more than one JBoss EAP host controller as a system service on a single machine. Copy the EAP_HOME /domain directory for the domain controller. Copy the EAP_HOME /domain directory for a host controller. Start the domain controller using /path/to /domain1 . Start the host controller using /path/to /host1 . Note When starting a host controller, you must specify the address of the domain controller using the jboss.domain.master.address property. Additionally, since this host controller is running on the same machine as the domain controller, you must change the management interface so that it does not conflict with the domain controller's management interface. This command sets the jboss.management.http.port property. Each instance started in this manner will share the rest of the resources in the base installation directory, for example, EAP_HOME /modules/ , but use the domain configuration from the directory specified by jboss.domain.base.dir . 8.6.2. Set Up a Managed Domain on Two Machines Note You may need to configure your firewall to run this example. You can create a managed domain on two machines, where one machine is a domain controller and the other machine is a host. For more information, see About the Domain Controller . IP1 = IP address of the domain controller (Machine 1) IP2 = IP address of the host (Machine 2) Create a Managed Domain on Two Machines On Machine 1 Add a management user so that the host can be authenticated by the domain controller. Use the add-user.sh script to add the management user for the host controller, HOST_NAME . Make sure to answer yes to the last prompt and note the secret value provided. 
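For example, run the script interactively from the domain controller installation. This is a minimal sketch; the exact wording of the prompts varies slightly between JBoss EAP versions.

EAP_HOME /bin/add-user.sh

When prompted, choose a Management User, enter HOST_NAME as the user name, provide a password, and answer yes when asked whether the new user will be used for one AS process to connect to another AS process.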
This secret value will be used in the host controller configuration, and will appear similar to the line below: <secret value=" SECRET_VALUE " /> Start the domain controller. Specify the host-master.xml configuration file, which is preconfigured for a dedicated domain controller. Also, set the jboss.bind.address.management property to make the domain controller visible to other machines. On Machine 2 Update the host configuration with the user credentials. Edit EAP_HOME /domain/configuration/host-slave.xml and set the host name, HOST_NAME , and secret value, SECRET_VALUE . <host xmlns="urn:jboss:domain:8.0" name=" HOST_NAME "> <management> <security-realms> <security-realm name="ManagementRealm"> <server-identities> <secret value=" SECRET_VALUE " /> </server-identities> ... Start the host controller. Specify the host-slave.xml configuration file, which is preconfigured for a slave host controller. Also, set the jboss.domain.master.address property to connect to the domain controller and the jboss.bind.address property to set the host controller bind address. You can now manage the domain from the management CLI by specifying the domain controller address with the --controller parameter when launching. Or you can manage the domain from the management console at http:// IP1 :9990 . 8.7. Managing JBoss EAP 7.4 Version The latest version of JBoss EAP can manage JBoss EAP servers and hosts that are running an earlier version. See the following section for managing earlier releases from the latest version of JBoss EAP: Configure a JBoss EAP 7.4 domain controller to administer minor releases of JBoss EAP . Note Red Hat has deprecated the JBoss EAP 7.4 domain controller that manages hosts and servers running JBoss EAP 6.x. This deprecated support functionality will be completely removed in the next major JBoss EAP release, version 8.0. JBoss EAP 8.0 only supports hosts and servers that are running JBoss EAP 7.4. 8.7.1. Configure a JBoss EAP 7.x Domain Controller to Administer JBoss EAP 6 Instances A JBoss EAP 7.4 domain controller can manage hosts and servers running JBoss EAP 6, if the hosts and servers are running JBoss EAP 6.2 or later. Note When a JBoss EAP 7.0 domain controller is managing JBoss EAP 7.0 hosts that are on a different patch release, the JBoss EAP 7.0 domain controller does not need any configuration changes. However, the JBoss EAP 7.0 domain controller must be running a patch release that is equal to or later than the versions on the host controllers that it manages. Complete the following tasks to successfully manage JBoss EAP 6 instances in a JBoss EAP 7 managed domain. Add the JBoss EAP 6 configuration to the JBoss EAP 7 domain controller . Update the behavior for the JBoss EAP 6 profiles . Set the server group for the JBoss EAP 6 servers . Prevent the JBoss EAP 6 instances from receiving JBoss EAP 7 updates . Once these tasks are complete, you can manage your JBoss EAP 6 servers and configurations from the JBoss EAP 7 domain controller using the management CLI. Note that JBoss EAP 6 hosts will not be able to take advantage of new JBoss EAP 7 features, such as batch processing. Warning Because the management console is optimized for the latest version of JBoss EAP, you should not use it to update your JBoss EAP 6 hosts, servers, and profiles. Use the management CLI instead when managing your JBoss EAP 6 configurations from a JBoss EAP 7 managed domain. 8.7.1.1. 
Add the JBoss EAP 6 Configuration to the JBoss EAP 7 Domain Controller To allow the domain controller to manage your JBoss EAP 6 servers, you must provide the JBoss EAP 6 configuration details in the JBoss EAP 7 domain configuration. You can do this by copying the JBoss EAP 6 profiles, socket binding groups, and server groups to the JBoss EAP 7 domain.xml configuration file. You will need to rename resources if any conflict with the existing names in the JBoss EAP 7 configuration. There are also some additional adjustments to make to ensure the proper behavior. The following procedure uses the JBoss EAP 6 default profile, standard-sockets socket binding group, and main-server-group server group. Edit the JBoss EAP 7 domain.xml configuration file. It is recommended to back up this file before editing. Copy the applicable JBoss EAP 6 profiles to the JBoss EAP 7 domain.xml file. This procedure assumes that the JBoss EAP 6 default profile was copied and renamed to eap6-default . JBoss EAP 7 domain.xml <profiles> ... <profile name="eap6-default"> ... </profile> </profiles> Add the necessary extensions used by this profile. If your JBoss EAP 6 profile uses subsystems that are no longer present in JBoss EAP 7, you must add the appropriate extensions to the JBoss EAP domain configuration. JBoss EAP 7 domain.xml <extensions> ... <extension module="org.jboss.as.configadmin"/> <extension module="org.jboss.as.threads"/> <extension module="org.jboss.as.web"/> <extensions> Copy the applicable JBoss EAP 6 socket binding groups to the JBoss EAP 7 domain.xml file. This procedure assumes that the JBoss EAP 6 standard-sockets socket binding group was copied and renamed to eap6-standard-sockets . JBoss EAP 7 domain.xml <socket-binding-groups> ... <socket-binding-group name="eap6-standard-sockets" default-interface="public"> ... </socket-binding-group> </socket-binding-groups> Copy the applicable JBoss EAP 6 server groups to the JBoss EAP 7 domain.xml file. This procedure assumes that the JBoss EAP 6 main-server-group server group was copied and renamed to eap6-main-server-group . You must also update this server group to use the JBoss EAP 6 profile, eap6-default , and the JBoss EAP 6 socket binding group, eap6-standard-sockets . JBoss EAP 7 domain.xml <server-groups> ... <server-group name="eap6-main-server-group" profile="eap6-default"> ... <socket-binding-group ref="eap6-standard-sockets"/> </server-group> </server-groups> 8.7.1.2. Update the Behavior for the JBoss EAP 6 Profiles Additional updates to the profiles used by your JBoss EAP 6 instances are necessary depending on the JBoss EAP version and desired behavior. You may require additional changes depending on the subsystems and configuration that your existing JBoss EAP 6 instances use. Start the JBoss EAP 7 domain controller and launch its management CLI to perform the following updates. These examples assume that the JBoss EAP 6 profile is eap6-default . Remove the bean-validation subsystem. JBoss EAP 7 moved bean validation functionality from the ee subsystem into its own subsystem, bean-validation . If a JBoss EAP 7 domain controller sees a legacy ee subsystem, it adds the new bean-validation subsystem. However, the JBoss EAP 6 hosts will not recognize this subsystem, so it must be removed. JBoss EAP 7 Domain Controller CLI Set CDI 1.0 behavior. This is only necessary if you want CDI 1.0 behavior for your JBoss EAP 6 servers, as opposed to behavior of later CDI versions used in JBoss EAP 7. 
If you want CDI 1.0 behavior, make the following updates to the weld subsystem. JBoss EAP 7 Domain Controller CLI Enable datasource statistics for JBoss EAP 6.2. This is only necessary if your profile is being used by JBoss EAP 6.2 servers. JBoss EAP 6.3 introduced the statistics-enabled attribute, which defaults to false to not collect statistics; however, the JBoss EAP 6.2 behavior was to collect statistics. If this profile is used by a JBoss EAP 6.2 host and a host running a newer JBoss EAP version, the behavior would be inconsistent between hosts, which is not allowed. Therefore, profiles intended for use by a JBoss EAP 6.2 host should make the following change for their datasources. JBoss EAP 7 Domain Controller CLI 8.7.1.3. Set the Server Group for the JBoss EAP 6 Servers If you renamed the server groups, you need to update the JBoss EAP 6 host configuration to use the new server groups specified in the JBoss EAP 7 configuration. This example uses the eap6-main-server-group server group specified in the JBoss EAP 7 domain.xml . JBoss EAP 6 host-slave.xml <servers> <server name="server-one" group="eap6-main-server-group"/> <server name="server-two" group="eap6-main-server-group"> <socket-bindings port-offset="150"/> </server> </servers> Note A host cannot use features or configuration settings that were introduced in a newer version of JBoss EAP than the one the host is running. 8.7.1.4. Prevent the JBoss EAP 6 Instances from Receiving JBoss EAP 7 Updates The domain controller in a managed domain forwards configuration updates to its host controllers. You must use the host-exclude configuration to specify the resources that should be hidden from particular versions. Choose the appropriate preconfigured host-exclude option for your JBoss EAP 6 version: EAP62 , EAP63 , EAP64 , or EAP64z . The active-server-groups attribute of the host-exclude configuration specifies the list of server groups that are used by a particular version. These server groups and their associated profiles, socket binding groups, and deployment resources will be available to hosts of this version, but all others will be hidden from these hosts. This example assumes that the version is JBoss EAP 6.4.z and adds the JBoss EAP 6 server group eap6-main-server-group as an active server group. JBoss EAP 7 Domain Controller CLI /host-exclude=EAP64z:write-attribute(name=active-server-groups,value=[eap6-main-server-group]) If necessary, you can specify additional socket binding groups used by your servers using the active-socket-binding-groups attribute. This is only required for socket binding groups that are not associated with the server groups specified in active-server-groups . 8.7.2. Configure a JBoss EAP 7.4 Domain Controller to Administer Minor Releases of JBoss EAP A JBoss EAP 7.4 domain controller can manage hosts and servers running from a minor release of JBoss EAP. Complete the following tasks to successfully manage JBoss EAP 7.3 instances in a JBoss EAP 7.4 managed domain. Add the JBoss EAP 7.3 configuration to the JBoss EAP 7.4 domain controller . Set the server group for the JBoss EAP 7.3 servers . Prevent the JBoss EAP 7.3 instances from receiving JBoss EAP 7.4 updates . After you complete these tasks, you can manage your JBoss EAP 7.3 servers and configurations from the JBoss EAP 7.4 domain controller using the management CLI. 
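For example, after the tasks are complete you can launch the management CLI against the JBoss EAP 7.4 domain controller in the usual way. This is a sketch; replace DOMAIN_CONTROLLER_IP_ADDRESS with the address of your domain controller.

EAP_HOME /bin/jboss-cli.sh --connect --controller= DOMAIN_CONTROLLER_IP_ADDRESS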
Warning Because the management console is optimized for the latest version of JBoss EAP, you must use the CLI to manage your JBoss EAP 7.3 configurations from a JBoss EAP 7.4 managed domain. Do not use the management console to update JBoss EAP 7.3 hosts, servers, and profiles. 8.7.2.1. Add the JBoss EAP 7.3 Configuration to the JBoss EAP 7.4 Domain Controller To enable the domain controller to manage your JBoss EAP 7.3 servers, you must provide the JBoss EAP 7.3 configuration details in the JBoss EAP 7.4 domain configuration. You can do this by copying the JBoss EAP 7.3 profiles, socket binding groups, and server groups to the JBoss EAP 7.4 domain.xml configuration file. You must rename a resource if its name conflicts with resource names in the JBoss EAP 7.4 configuration. The following procedure uses the JBoss EAP 7.3 default profile, standard-sockets socket binding group, and main-server-group server group. Prerequisites You have copied and renamed the JBoss EAP 7.3 default profile to eap73-default . You have copied and renamed the JBoss EAP 7.3 standard-sockets socket binding group to eap73-standard-sockets . You have copied and renamed the JBoss EAP 7.3 main-server-group server group to eap73-main-server-group . You have updated the server group to use the JBoss EAP 7.3 profile, eap73-default , and to use the JBoss EAP 7.3 socket binding group, eap73-standard-sockets . Procedure Edit the JBoss EAP 7.4 domain.xml configuration file. Important Back up the JBoss EAP 7.4 domain.xml configuration file before you edit the file. Copy the applicable JBoss EAP 7.3 profiles to the JBoss EAP 7.4 domain.xml file. For example: <profiles> ... <profile name="eap73-default"> ... </profile> </profiles> Copy the applicable JBoss EAP 7.3 socket binding groups to the JBoss EAP 7.4 domain.xml file. For example: <socket-binding-groups> ... <socket-binding-group name="eap73-standard-sockets" default-interface="public"> ... </socket-binding-group> </socket-binding-groups> Copy the applicable JBoss EAP 7.3 server groups to the JBoss EAP 7.4 domain.xml file: <server-groups> ... <server-group name="eap73-main-server-group" profile="eap73-default"> ... <socket-binding-group ref="eap73-standard-sockets"/> </server-group> </server-groups> 8.7.2.2. Set the Server Group for the JBoss EAP 7.3 Servers If you renamed the server groups, you need to update the JBoss EAP 7.3 host configuration to use the new server groups specified in the JBoss EAP 7.4 configuration. This example uses the eap73-main-server-group server group specified in the JBoss EAP 7.4 domain.xml . JBoss EAP 7.3 host-slave.xml <servers> <server name="server-one" group="eap73-main-server-group"/> <server name="server-two" group="eap73-main-server-group"> <socket-bindings port-offset="150"/> </server> </servers> Note A host cannot use features or configuration settings that were introduced in a newer version of JBoss EAP than the one the host is running. 8.7.2.3. Prevent the JBoss EAP 7.3 Instances from Receiving JBoss EAP 7.4 Updates A managed domain controller forwards configuration updates to its host controllers. To ensure that a JBoss EAP 7.3 host does not receive configuration and resources that are specific to JBoss EAP 7.4, you must configure the JBoss EAP 7.3 host to ignore those resources. You can do this by setting the ignore-unused-configuration attribute on the JBoss EAP 7.3 host. Note You can also use the host-exclude configuration to instruct the domain controller to hide certain resources from hosts running certain JBoss EAP versions. 
For an example of how to use the host-exclude configuration, see Prevent the JBoss EAP 6 Instances from Receiving JBoss EAP 7 Updates . For JBoss EAP 7.3, you use the EAP73 host-exclude option. Set the ignore-unused-configuration attribute to true in the JBoss EAP 7.3 host controller connection configuration to the remote domain controller. JBoss EAP 7.3 host-slave.xml <domain-controller> <remote security-realm="ManagementRealm" ignore-unused-configuration="true"> <discovery-options> <static-discovery name="primary" protocol="USD{jboss.domain.master.protocol:remote}" host="USD{jboss.domain.master.address}" port="USD{jboss.domain.master.port:9990}"/> </discovery-options> </remote> </domain-controller> With this setting, only the server groups used by this host, and their associated profiles, socket binding groups, and deployment resources, are made available to the host. All other resources are ignored. 8.8. Managing JBoss EAP Profiles 8.8.1. About Profiles JBoss EAP uses profiles as a way to organize which subsystems are available to a server. A profile consists of a collection of available subsystems along with each subsystem's specific configuration. A profile with a large number of subsystems results in a server with a large set of capabilities. A profile with a small, focused set of subsystems will have fewer capabilities but a smaller footprint. JBoss EAP comes with five predefined profiles that should satisfy most use cases: default Includes commonly used subsystems, such as logging , security , datasources , infinispan , webservices , ee , ejb3 , transactions , and so on. ha Includes the subsystems provided in the default profile with the addition of the jgroups and modcluster subsystems for high availability full Includes the subsystems provided in the default profile with the addition of the messaging-activemq and iiop-openjdk subsystems full-ha Includes the subsystems provided in the full profile with the addition of the jgroups and modcluster subsystems for high availability load-balancer Includes the minimum subsystems necessary to use the built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances. Note JBoss EAP offers the ability to disable extensions or unload drivers and other services manually by removing the subsystems from the configuration of existing profiles. However, for most cases this is unnecessary. Since JBoss EAP dynamically loads subsystems as they are needed, if the server or an application never use a subsystem, it will not be loaded. In cases where the existing profiles do not provide the necessary capabilities, JBoss EAP also provides the ability to define custom profiles as well. 8.8.2. Cloning a Profile JBoss EAP allows you to create a new profile in a managed domain by cloning an existing profile. This will create a copy of the original profile's configuration and subsystems. A profile can be cloned using the management CLI by using the clone operation on the desired profile to clone. You can also clone a profile from the management console by selecting the desired profile to clone and clicking Clone . 8.8.3. Creating Hierarchical Profiles In a managed domain, you can create a hierarchy of profiles. This allows you to create a base profile with common extensions that other profiles can inherit. The managed domain defines several profiles in domain.xml . If multiple profiles use the same configuration for a particular subsystem, you can configure it in just one place instead of different profiles. 
The values in parent profiles cannot be overridden. In addition, each profile must be self-sufficient. If an element or subsystem is referenced, then it must be defined in the profile where it is referenced. A profile can include other profiles in a hierarchy using the management CLI by using the list-add operation and providing the profile to include. | [
"/profile= PROFILE_NAME /subsystem= SUBSYSTEM_NAME :read-resource(recursive=true)",
"/subsystem=datasources/data-source=ExampleDS:read-resource",
"/profile=default/subsystem=datasources/data-source=ExampleDS:read-resource",
"/host= HOST_NAME /server= SERVER_NAME :read-resource",
"/host= HOST_NAME /server-config= SERVER_NAME :write-attribute(name= ATTRIBUTE_NAME ,value= VALUE )",
"/server-group= SERVER_GROUP_NAME :read-resource",
"EAP_HOME /bin/domain.sh --host-config=host-master.xml",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml",
"<domain-controller> <local/> </domain-controller>",
"<management-interfaces> <http-interface security-realm=\"ManagementRealm\" http-upgrade-enabled=\"true\"> <socket interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> </http-interface> </management-interfaces>",
"<domain-controller> <remote security-realm=\"ManagementRealm\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"USD{jboss.domain.master.address}\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address= IP_ADDRESS",
"<host xmlns=\"urn:jboss:domain:8.0\" name=\"host1\">",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml",
"EAP_HOME /bin/jboss-cli.sh --connect --controller= DOMAIN_CONTROLLER_IP_ADDRESS",
"/host= EXISTING_HOST_NAME :write-attribute(name=name,value= NEW_HOST_NAME )",
"<host name=\" NEW_HOST_NAME \" xmlns=\"urn:jboss:domain:8.0\">",
"reload --host= EXISTING_HOST_NAME",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.host.name= HOST_NAME",
"<server-group name=\"main-server-group\" profile=\"full\"> <jvm name=\"default\"> <heap size=\"64m\" max-size=\"512m\"/> </jvm> <socket-binding-group ref=\"full-sockets\"/> <deployments> <deployment name=\"test-application.war\" runtime-name=\"test-application.war\"/> <deployment name=\"helloworld.war\" runtime-name=\"helloworld.war\" enabled=\"false\"/> </deployments> </server-group>",
"/server-group= SERVER_GROUP_NAME :add(profile= PROFILE_NAME ,socket-binding-group= SOCKET_BINDING_GROUP_NAME )",
"/server-group= SERVER_GROUP_NAME :write-attribute(name= ATTRIBUTE_NAME ,value= VALUE )",
"/server-group= SERVER_GROUP_NAME :remove",
"<servers> <server name=\"server-one\" group=\"main-server-group\"> </server> <server name=\"server-two\" group=\"main-server-group\" auto-start=\"true\"> <socket-bindings port-offset=\"150\"/> </server> <server name=\"server-three\" group=\"other-server-group\" auto-start=\"false\"> <socket-bindings port-offset=\"250\"/> </server> </servers>",
"/host= HOST_NAME /server-config= SERVER_NAME :add(group= SERVER_GROUP_NAME )",
"/host= HOST_NAME /server-config= SERVER_NAME :write-attribute(name= ATTRIBUTE_NAME ,value= VALUE )",
"/host= HOST_NAME /server-config= SERVER_NAME :remove",
"/host= HOST_NAME /server-config= SERVER_NAME :start",
"/server-group= SERVER_GROUP_NAME :start-servers",
"/host= HOST_NAME /server-config= SERVER_NAME :stop",
"/server-group= SERVER_GROUP_NAME :stop-servers",
"/host= HOST_NAME /server-config= SERVER_NAME :reload",
"/server-group= SERVER_GROUP_NAME :reload-servers",
"/server-group= SERVER_GROUP_NAME :kill-servers",
"<domain-controller> <remote security-realm=\"ManagementRealm\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"172.16.81.100\" port=\"USD{jboss.domain.master.port:9990}\"/> <static-discovery name=\"backup\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"172.16.81.101\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml --cached-dc",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml --backup",
"<domain-controller> <remote username=\"USDlocal\" security-realm=\"ManagementRealm\" ignore-unused-configuration=\"false\"> <discovery-options> </discovery-options> </remote> </domain-controller>",
"/host=backup:write-attribute(name=domain-controller.local, value={})",
"reload --host= HOST_NAME",
"cp -r EAP_HOME /domain /path/to /domain1",
"cp -r EAP_HOME /domain /path/to /host1",
"EAP_HOME /bin/domain.sh --host-config=host-master.xml -Djboss.domain.base.dir= /path/to /domain1",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir= /path/to /host1 -Djboss.domain.master.address= IP_ADDRESS -Djboss.management.http.port= PORT",
"<secret value=\" SECRET_VALUE \" />",
"EAP_HOME /bin/domain.sh --host-config=host-master.xml -Djboss.bind.address.management= IP1",
"<host xmlns=\"urn:jboss:domain:8.0\" name=\" HOST_NAME \"> <management> <security-realms> <security-realm name=\"ManagementRealm\"> <server-identities> <secret value=\" SECRET_VALUE \" /> </server-identities>",
"EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address= IP1 -Djboss.bind.address= IP2",
"EAP_HOME /bin/jboss-cli.sh --connect --controller= IP1",
"<profiles> <profile name=\"eap6-default\"> </profile> </profiles>",
"<extensions> <extension module=\"org.jboss.as.configadmin\"/> <extension module=\"org.jboss.as.threads\"/> <extension module=\"org.jboss.as.web\"/> <extensions>",
"<socket-binding-groups> <socket-binding-group name=\"eap6-standard-sockets\" default-interface=\"public\"> </socket-binding-group> </socket-binding-groups>",
"<server-groups> <server-group name=\"eap6-main-server-group\" profile=\"eap6-default\"> <socket-binding-group ref=\"eap6-standard-sockets\"/> </server-group> </server-groups>",
"/profile=eap6-default/subsystem=bean-validation:remove",
"/profile=eap6-default/subsystem=weld:write-attribute(name=require-bean-descriptor,value=true) /profile=eap6-default/subsystem=weld:write-attribute(name=non-portable-mode,value=true)",
"/profile=eap6-default/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)",
"<servers> <server name=\"server-one\" group=\"eap6-main-server-group\"/> <server name=\"server-two\" group=\"eap6-main-server-group\"> <socket-bindings port-offset=\"150\"/> </server> </servers>",
"/host-exclude=EAP64z:write-attribute(name=active-server-groups,value=[eap6-main-server-group])",
"<profiles> <profile name=\"eap73-default\"> </profile> </profiles>",
"<socket-binding-groups> <socket-binding-group name=\"eap73-standard-sockets\" default-interface=\"public\"> </socket-binding-group> </socket-binding-groups>",
"<server-groups> <server-group name=\"eap73-main-server-group\" profile=\"eap73-default\"> <socket-binding-group ref=\"eap73-standard-sockets\"/> </server-group> </server-groups>",
"<servers> <server name=\"server-one\" group=\"eap73-main-server-group\"/> <server name=\"server-two\" group=\"eap73-main-server-group\"> <socket-bindings port-offset=\"150\"/> </server> </servers>",
"<domain-controller> <remote security-realm=\"ManagementRealm\" ignore-unused-configuration=\"true\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"USD{jboss.domain.master.address}\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>",
"/profile=full-ha:clone(to-profile=cloned-profile)",
"/profile=new-profile:list-add(name=includes, value= PROFILE_NAME )"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/domain_management |
Chapter 4. Advisories related to this release | Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2023:5745 RHSA-2023:5746 RHSA-2023:5747 RHSA-2023:5750 RHSA-2023:5751 RHSA-2023:5752 RHSA-2023:5753 Revised on 2024-05-03 15:37:07 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.9/openjdk-1709-advisory_openjdk |
Chapter 4. Creating and building an application using the CLI | Chapter 4. Creating and building an application using the CLI 4.1. Before you begin Review About the OpenShift CLI . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. You must have the OpenShift CLI ( oc ) downloaded and installed . 4.2. Logging in to the CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure Log into OpenShift Container Platform from the CLI using your username and password, with an OAuth token, or with a web browser: With username and password: USD oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify With an OAuth token: USD oc login <https://api.your-openshift-server.com> --token=<tokenID> With a web browser: USD oc login <cluster_url> --web You can now create a project or issue other commands for managing your cluster. Additional resources oc login oc logout 4.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure To create a new project, enter the following command: USD oc new-project user-getting-started --display-name="Getting Started with OpenShift" Example output Now using project "user-getting-started" on server "https://openshift.example.com:6443". Additional resources oc new-project 4.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To add the view role to the default service account in the user-getting-started project , enter the following command: USD oc adm policy add-role-to-user view -z default -n user-getting-started Additional resources Understanding authentication RBAC overview oc policy add-role-to-user 4.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front-end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. 
Prerequisites You must have access to an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Procedure To deploy an application, enter the following command: USD oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app' Example output --> Found container image 0c2f55f (12 months old) from quay.io for "quay.io/openshiftroadshow/parksmap:latest" * An image stream tag will be created as "parksmap:latest" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend ... imagestream.image.openshift.io "parksmap" created deployment.apps "parksmap" created service "parksmap" created --> Success Additional resources oc new-app 4.5.1. Creating a route External clients can access applications running on OpenShift Container Platform through the routing layer and the data object behind that is a route . The default OpenShift Container Platform router (HAProxy) uses the HTTP header of the incoming request to determine where to proxy the connection. Optionally, you can define security, such as TLS, for the route. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To retrieve the created application service, enter the following command: USD oc get service Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s To create a route, enter the following command: USD oc create route edge parksmap --service=parksmap Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc create route edge oc get 4.5.2. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. You can view the pods in your cluster and to determine the health of those pods and the cluster as a whole. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. 
Procedure To list all pods with node names, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s To list all pod details, enter the following command: USD oc describe pods Example output Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" Normal Pulled 35s kubelet Successfully pulled image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap Additional resources oc describe oc get oc label Viewing pods Viewing pod logs 4.5.3. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You must have access to an OpenShift Container Platform cluster. 
You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To scale your application from one pod instance to two pod instances, enter the following command: USD oc scale --current-replicas=1 --replicas=2 deployment/parksmap Example output deployment.apps/parksmap scaled Verification To ensure that your application scaled properly, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s To scale your application back down to one pod instance, enter the following command: USD oc scale --current-replicas=2 --replicas=1 deployment/parksmap Additional resources oc scale 4.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is nationalparks . Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To create a new Python application, enter the following command: USD oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true Example output --> Found image 0406f6c (13 days old) in image stream "openshift/python" under tag "3.9-ubi9" for "python" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag "nationalparks:latest" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend ... imagestream.image.openshift.io "nationalparks" created buildconfig.build.openshift.io "nationalparks" created deployment.apps "nationalparks" created service "nationalparks" created --> Success To create a route to expose your application, nationalparks , enter the following command: USD oc create route edge nationalparks --service=nationalparks Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc new-app 4.7. 
Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To connect to a database, enter the following command: USD oc new-app quay.io/centos7/mongodb-36-centos7:master --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb' Example output --> Found container image dc18f52 (3 years old) from quay.io for "quay.io/centos7/mongodb-36-centos7:master" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as "mongodb-nationalparks:master" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app ... imagestream.image.openshift.io "mongodb-nationalparks" created deployment.apps "mongodb-nationalparks" created service "mongodb-nationalparks" created --> Success Additional resources oc new-app 4.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To create a secret, enter the following command: USD oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb Example output secret/nationalparks-mongodb-parameters created To update the environment variable to attach the mongodb secret to the nationalparks workload, enter the following command: USD oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks Example output deployment.apps/nationalparks updated To show the status of the nationalparks deployment, enter the following command: USD oc rollout status deployment nationalparks Example output deployment "nationalparks" successfully rolled out To show the status of the mongodb-nationalparks deployment, enter the following command: USD oc rollout status deployment mongodb-nationalparks Example output deployment "mongodb-nationalparks" successfully rolled out Additional resources oc create secret generic oc set env oc rollout status 4.7.2. 
Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To load national parks data, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load Example output "Items inserted in database: 2893" To verify that your data is loaded properly, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all Example output (trimmed) , {"id": "Great Zimbabwe", "latitude": "-20.2674635", "longitude": "30.9337986", "name": "Great Zimbabwe"}] To add labels to the route, enter the following command: USD oc label route nationalparks type=parksmap-backend Example output route.route.openshift.io/nationalparks labeled To retrieve your routes to view your map, enter the following command: USD oc get routes Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Copy and paste the HOST/PORT path you retrieved above into your web browser. Your browser should display a map of the national parks across the world. Figure 4.1. National parks across the world Additional resources oc exec oc label oc get | [
"oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify",
"oc login <https://api.your-openshift-server.com> --token=<tokenID>",
"oc login <cluster_url> --web",
"oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"",
"Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".",
"oc adm policy add-role-to-user view -z default -n user-getting-started",
"oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'",
"--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success",
"oc get service",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s",
"oc create route edge parksmap --service=parksmap",
"route.route.openshift.io/parksmap created",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None",
"oc get pods",
"NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s",
"oc describe pods",
"Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap",
"oc scale --current-replicas=1 --replicas=2 deployment/parksmap",
"deployment.apps/parksmap scaled",
"oc get pods",
"NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s",
"oc scale --current-replicas=2 --replicas=1 deployment/parksmap",
"oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true",
"--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi9\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success",
"oc create route edge nationalparks --service=nationalparks",
"route.route.openshift.io/parksmap created",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None",
"oc new-app quay.io/centos7/mongodb-36-centos7:master --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'",
"--> Found container image dc18f52 (3 years old) from quay.io for \"quay.io/centos7/mongodb-36-centos7:master\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:master\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success",
"oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb",
"secret/nationalparks-mongodb-parameters created",
"oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks",
"deployment.apps/nationalparks updated",
"oc rollout status deployment nationalparks",
"deployment \"nationalparks\" successfully rolled out",
"oc rollout status deployment mongodb-nationalparks",
"deployment \"mongodb-nationalparks\" successfully rolled out",
"oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load",
"\"Items inserted in database: 2893\"",
"oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all",
", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]",
"oc label route nationalparks type=parksmap-backend",
"route.route.openshift.io/nationalparks labeled",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/getting_started/openshift-cli |
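A quick way to confirm that the labeled backend route is reachable from outside the cluster is to query it directly. This is only a sketch: the route name matches the tutorial above, the host value is looked up from the route object rather than hard-coded, and the /ws/data/all endpoint is the same one exercised earlier.
BACKEND_HOST=$(oc get route nationalparks -o jsonpath='{.spec.host}')   # resolve the public hostname of the backend route
curl -sk "https://${BACKEND_HOST}/ws/data/all" | head -c 200            # print the first 200 bytes of the park data
If the data-load step completed, the output should begin with a JSON array of park records similar to the snippet shown above.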
Appendix B. S3 common request headers | Appendix B. S3 common request headers The following table lists the valid common request headers and their descriptions. Table B.1. Request Headers Request Header Description CONTENT_LENGTH Length of the request body. DATE Request time and date (in UTC). HOST The name of the host server. AUTHORIZATION Authorization token. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/s3-common-request-headers_dev |
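To make the table above more concrete, the following sketch shows where these common headers appear in a raw S3 request issued with curl. The endpoint rgw.example.com, the bucket name, and the credential string are placeholders, and in practice an S3 client library computes the AUTHORIZATION value (an access key plus a request signature) for you.
curl -X GET "http://rgw.example.com/testbucket/" \
  -H "Host: rgw.example.com" \
  -H "Date: Tue, 27 Aug 2024 10:00:00 +0000" \
  -H "Content-Length: 0" \
  -H "Authorization: AWS <access_key>:<computed_signature>"
Tools such as s3cmd or the AWS CLI add these headers automatically; sending them by hand is mainly useful when debugging gateway behavior.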
16.5. Disabling and Re-enabling Service Entries | 16.5. Disabling and Re-enabling Service Entries Active services can be accessed by other services, hosts, and users within the domain. There can be situations when it is necessary to remove a host or a service from activity. However, deleting a service or a host permanently removes the entry and all the associated configuration. 16.5.1. Disabling Service Entries Disabling a service prevents domain users from accessing it without permanently removing it from the domain. This can be done by using the service-disable command. For a service, specify the principal for the service. For example: Important Disabling a host entry not only disables that host. It disables every configured service on that host as well. 16.5.2. Re-enabling Services Disabling a service essentially kills its current, active keytabs. Removing the keytabs effectively removes the service from the IdM domain without otherwise touching its configuration entry. To re-enable a service, simply use the ipa-getkeytab command. The -s option specifies the IdM server from which to request the keytab, -p gives the principal name, and -k gives the file to which to save the keytab. For example, requesting a new HTTP keytab: | [
"[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa service-disable HTTP/server.example.com",
"ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/service-disable |
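After retrieving a new keytab, one way to confirm that the service is usable again is sketched below; the principal and keytab path follow the HTTP example above, so adjust them for your own service.
ipa service-show HTTP/server.example.com                          # the entry should report that a keytab is present
klist -kt /etc/httpd/conf/krb5.keytab                             # list the key versions stored in the new keytab
kinit -kt /etc/httpd/conf/krb5.keytab HTTP/server.example.com     # authenticate as the service principal using the keytab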
13.7. Locking Repartitioning | 13.7. Locking Repartitioning polkit enables you to set permissions for individual operations. For udisks2 , the utility for disk management services, the configuration is located at /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy . This file contains a set of actions and default values, which can be overridden by the system administrator. Important Remember that polkit configuration stored in /etc overrides the configuration shipped by packages in /usr/share/ . Procedure 13.7. To Prevent Users from Changing Disks Settings Create a file with the same content as in /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy . Do not change the /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy file; your changes would be overwritten by the next package update. Delete the action you do not need and add the following lines to the /etc/polkit-1/actions/org.freedesktop.udisks2.policy file: Replace no with auth_admin if you want to ensure only the root user is able to carry out the action. Save the changes. When the user tries to change the disks settings, the following message is returned: | [
"cp /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy /etc/share/polkit-1/actions/org.freedesktop.udisks2.policy",
"<action id=\"org.freedesktop.udisks2.modify-device\"> <message>Authentication is required to modify the disks settings</message> <defaults> <allow_any>no</allow_any> <allow_inactive>no</allow_inactive> <allow_active>yes</allow_active> </defaults> </action>",
"Authentication is required to modify the disks settings"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/lock-down-repartitioning |
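To verify which defaults polkit actually resolves for the action after your override file is in place, you can query the action directly; the action ID below is the one edited in the procedure above.
pkaction --action-id org.freedesktop.udisks2.modify-device --verbose
The output lists the implicit any, inactive, and active authorizations, which, given that /etc takes precedence as noted above, should now reflect the values from /etc/polkit-1/actions/org.freedesktop.udisks2.policy rather than the packaged defaults.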
Chapter 6. Reference materials | Chapter 6. Reference materials To learn more about the vulnerability service, or other Red Hat Insights for Red Hat Enterprise Linux services and capabilities, the following resources might also be of interest: Assessing and Monitoring Security Vulnerabilities on RHEL Systems with FedRAMP Automation Toolkit > Remediations Red Hat Insights for Red Hat Enterprise Linux Documentation Red Hat Insights for Red Hat Enterprise Linux Product Support page | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports_with_fedramp/ref-vuln-report |
Chapter 4. Important changes to OpenShift Jenkins images | Chapter 4. Important changes to OpenShift Jenkins images OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io . It also removes the OpenShift Jenkins Maven and NodeJS Agent images from its payload: OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . OpenShift Container Platform 4.10 deprecated the OpenShift Jenkins Maven and NodeJS Agent images. OpenShift Container Platform 4.11 removes these images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . These changes support the OpenShift Container Platform 4.10 recommendation to use multiple container Pod Templates with the Jenkins Kubernetes Plugin . 4.1. Relocation of OpenShift Jenkins images OpenShift Container Platform 4.11 makes significant changes to the location and availability of specific OpenShift Jenkins images. Additionally, you can configure when and how to update these images. What stays the same with the OpenShift Jenkins images? The Cluster Samples Operator manages the ImageStream and Template objects for operating the OpenShift Jenkins images. By default, the Jenkins DeploymentConfig object from the Jenkins pod template triggers a redeployment when the Jenkins image changes. By default, this image is referenced by the jenkins:2 image stream tag of Jenkins image stream in the openshift namespace in the ImageStream YAML file in the Samples Operator payload. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the deprecated maven and nodejs pod templates are still in the default image configuration. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the jenkins-agent-maven and jenkins-agent-nodejs image streams still exist in your cluster. To maintain these image streams, see the following section, "What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace?" What changes in the support matrix of the OpenShift Jenkins image? Each new image in the ocp-tools-4 repository in the registry.redhat.io registry supports multiple versions of OpenShift Container Platform. When Red Hat updates one of these new images, it is simultaneously available for all versions. This availability is ideal when Red Hat updates an image in response to a security advisory. Initially, this change applies to OpenShift Container Platform 4.11 and later. It is planned that this change will eventually apply to OpenShift Container Platform 4.9 and later. Previously, each Jenkins image supported only one version of OpenShift Container Platform and Red Hat might update those images sequentially over time. What additions are there with the OpenShift Jenkins and Jenkins Agent Base ImageStream and ImageStreamTag objects? 
By moving from an in-payload image stream to an image stream that references non-payload images, OpenShift Container Platform can define additional image stream tags. Red Hat has created a series of new image stream tags to go along with the existing "value": "jenkins:2" and "value": "image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest" image stream tags present in OpenShift Container Platform 4.10 and earlier. These new image stream tags address some requests to improve how the Jenkins-related image streams are maintained. About the new image stream tags: ocp-upgrade-redeploy To update your Jenkins image when you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag corresponds to the existing 2 image stream tag of the jenkins image stream and the latest image stream tag of the jenkins-agent-base-rhel8 image stream. It employs an image tag specific to only one SHA or image digest. When the ocp-tools-4 image changes, such as for Jenkins security advisories, Red Hat Engineering updates the Cluster Samples Operator payload. user-maintained-upgrade-redeploy To manually redeploy Jenkins after you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the least specific image version indicator available. When you redeploy Jenkins, run the following command: USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift . When you issue this command, the OpenShift Container Platform ImageStream controller accesses the registry.redhat.io image registry and stores any updated images in the OpenShift image registry's slot for that Jenkins ImageStreamTag object. Otherwise, if you do not run this command, your Jenkins deployment configuration does not trigger a redeployment. scheduled-upgrade-redeploy To automatically redeploy the latest version of the Jenkins image when it is released, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the periodic importing of image stream tags feature of the OpenShift Container Platform image stream controller, which checks for changes in the backing image. If the image changes, for example, due to a recent Jenkins security advisory, OpenShift Container Platform triggers a redeployment of your Jenkins deployment configuration. See "Configuring periodic importing of image stream tags" in the following "Additional resources." What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace? The OpenShift Jenkins Maven and NodeJS Agent images for OpenShift Container Platform were deprecated in 4.10, and are removed from the OpenShift Container Platform install payload in 4.11. They do not have alternatives defined in the ocp-tools-4 repository. However, you can work around this by using the sidecar pattern described in the "Jenkins agent" topic mentioned in the following "Additional resources" section. However, the Cluster Samples Operator does not delete the jenkins-agent-maven and jenkins-agent-nodejs image streams created by prior releases, which point to the tags of the respective OpenShift Container Platform payload images on registry.redhat.io . Therefore, you can pull updates to these images by running the following commands: USD oc import-image jenkins-agent-nodejs -n openshift USD oc import-image jenkins-agent-maven -n openshift 4.2. 
Customizing the Jenkins image stream tag To override the default upgrade behavior and control how the Jenkins image is upgraded, you set the image stream tag value that your Jenkins deployment configurations use. The default upgrade behavior is the behavior that existed when the Jenkins image was part of the install payload. The image stream tag names, 2 and ocp-upgrade-redeploy , in the jenkins-rhel.json image stream file use SHA-specific image references. Therefore, when those tags are updated with a new SHA, the OpenShift Container Platform image change controller automatically redeploys the Jenkins deployment configuration from the associated templates, such as jenkins-ephemeral.json or jenkins-persistent.json . For new deployments, to override that default value, you change the value of the JENKINS_IMAGE_STREAM_TAG in the jenkins-ephemeral.json Jenkins template. For example, replace the 2 in "value": "jenkins:2" with one of the following image stream tags: ocp-upgrade-redeploy , the default value, updates your Jenkins image when you upgrade OpenShift Container Platform. user-maintained-upgrade-redeploy requires you to manually redeploy Jenkins by running USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift after upgrading OpenShift Container Platform. scheduled-upgrade-redeploy periodically checks the given <image>:<tag> combination for changes and upgrades the image when it changes. The image change controller pulls the changed image and redeploys the Jenkins deployment configuration provisioned by the templates. For more information about this scheduled import policy, see the "Adding tags to image streams" in the following "Additional resources." Note To override the current upgrade value for existing deployments, change the values of the environment variables that correspond to those template parameters. Prerequisites You are running OpenShift Jenkins on OpenShift Container Platform 4.13. You know the namespace where OpenShift Jenkins is deployed. Procedure Set the image stream tag value, replacing <namespace> with namespace where OpenShift Jenkins is deployed and <image_stream_tag> with an image stream tag: Example USD oc patch dc jenkins -p '{"spec":{"triggers":[{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["jenkins"],"from":{"kind":"ImageStreamTag","namespace":"<namespace>","name":"jenkins:<image_stream_tag>"}}}]}}' Tip Alternatively, to edit the Jenkins deployment configuration YAML, enter USD oc edit dc/jenkins -n <namespace> and update the value: 'jenkins:<image_stream_tag>' line. 4.3. Additional resources Adding tags to image streams Configuring periodic importing of image stream tags Jenkins agent Certified jenkins images Certified jenkins-agent-base images Certified jenkins-agent-maven images Certified jenkins-agent-nodejs images | [
"oc import-image jenkins-agent-nodejs -n openshift",
"oc import-image jenkins-agent-maven -n openshift",
"oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/jenkins/important-changes-to-openshift-jenkins-images |
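If you want to confirm what a given image stream tag currently resolves to before or after changing the trigger, the following queries are one way to check; the openshift namespace is where the Samples Operator keeps the Jenkins image streams, and the tag name is one of those described above.
oc get is jenkins -n openshift                                        # list the available Jenkins image stream tags
oc describe istag jenkins:scheduled-upgrade-redeploy -n openshift     # show the image digest and import policy behind one tag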
function::task_open_file_handles | function::task_open_file_handles Name function::task_open_file_handles - The number of open files of the task. Synopsis Arguments task task_struct pointer. General Syntax task_open_file_handles:long(task:long) Description This function returns the number of open file handles for the given task. | [
"function task_open_file_handles:long(task:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-open-file-handles |
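As a rough usage sketch, the function can be combined with other task tapset helpers in a one-line script; the timer probe and the task_current() helper are only illustrative choices, and the sampled task is simply whichever one is current when the timer fires.
stap -e 'probe timer.s(5) { printf("%s(%d): %d open file handles\n", execname(), pid(), task_open_file_handles(task_current())) }'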
Chapter 4. Migrating Templates deployments on Openshift | Chapter 4. Migrating Templates deployments on Openshift OpenShift templates were deprecated and removed from the Red Hat build of Keycloak container images. Using the Operator is the recommended alternative for deploying Red Hat build of Keycloak on OpenShift. Note OpenShift 3.x is no longer supported. You will generally need to create a Keycloak CR (of the Red Hat build of Keycloak Operator) that references an externally managed database. The PostgreSQL database with this template is managed by a DeploymentConfig. You initially retain the application_name-postgresql DeploymentConfig that was created by the template. The PostgreSQL database instance created by the DeploymentConfig will be usable by the Red Hat build of Keycloak Operator. This guide does not include directions for migrating from this instance to a self-managed database, either by an operator or your cloud provider. The Red Hat build of Keycloak Operator does not manage a database and it is required to have a database provisioned and managed separately. 4.1. Migrating deployments with the internal H2 database The following are the affected templates: sso76-ocp3-https sso76-ocp4-https sso76-ocp3-x509-https sso76-ocp4-x509-https These templates rely upon the devel database and are not supported for production use. 4.2. Migrating deployments with ephemeral PostgreSQL database The following are the affected templates: sso76-ocp3-postgresql sso76-ocp4-postgresql This template creates a PostgreSQL database without persistent storage, which is only recommended for development purposes. 4.3. Migrating deployments with persistent PostgreSQL database The following are the affected templates: sso76-ocp3-postgresql-persistent sso76-ocp4-postgresql-persistent sso76-ocp3-x509-postgresql-persistent sso76-ocp4-x509-postgresql-persistent 4.3.1. Prerequisites The instance of Red Hat Single Sign-On 7.6 was shut down so that it does not use the same database instance that will be used by Red Hat build of Keycloak . Database backup was created. You reviewed the Release Notes . 4.4. Migration process Install Red Hat build of Keycloak Operator to the namespace. Create new CRs and related Secrets. Manually migrate your template based Red Hat Single Sign-On 7.6 configuration to your new Red Hat build of Keycloak CR. See the following examples for suggested mappings between Template parameters and Keycloak CR fields. The following examples compare a Red Hat build of Keycloak Operator CR to the DeploymentConfig that was previously created by a Red Hat Single Sign-On 7.6 Template. Operator CR for Red Hat build of Keycloak apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: rhbk spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: sso-x509-https-secret DeploymentConfig for Red Hat Single Sign-On 7.6 apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: rhsso spec: replicas: 1 template: spec: volumes: - name: sso-x509-https-volume secret: secretName: sso-x509-https-secret defaultMode: 420 containers: volumeMounts: - name: sso-x509-https-volume readOnly: true env: - name: DB_SERVICE_PREFIX_MAPPING value: postgres-db=DB - name: DB_USERNAME value: username - name: DB_PASSWORD value: password The following tables refer to fields of Keycloak CR by a JSON path notation. For example, .spec refers to the spec field. 
Note that spec.unsupported is a Technology Preview field. It is more an indication that eventually that functionality will be achievable by other CR fields. Parameters marked in bold are supported by both the passthrough and reencrypt templates. 4.4.1. General Parameter Migration Red Hat Single Sign-On 7.6 Red Hat build of Keycloak 24.0 APPLICATION_NAME .metadata.name IMAGE_STREAM_NAMESPACE N/A - the image is controlled by the operator or you main use spec.image to specify a custom image SSO_ADMIN_USERNAME No direct setting, defaults to admin SSO_ADMIN_PASSWORD N/A - created by the operator during the initial reconciliation MEMORY_LIMIT .spec.unsupported.podTemplate.spec.containers[0].resources.limits['memory'] SSO_SERVICE_PASSWORD , SSO_SERVICE_USERNAME No longer used. SSO_TRUSTSTORE , SSO_TRUSTSTORE_PASSWORD , SSO_TRUSTSTORE_SECRET .spec.truststores Notice that truststores must not be password protected. SSO_REALM Not needed if you are reusing the existing database. An alternative is the RealmImport CR. 4.4.2. Database Deployment Parameter Migration POSTGRESQL_IMAGE_STREAM_TAG , POSTGRESQL_MAX_CONNECTIONS , VOLUME_CAPACITY and POSTGRESQL_SHARED_BUFFERS will need to be migrated to whatever replacement you have chosen creating the database deployment. 4.4.3. Database Connection Parameter Migration Red Hat Single Sign-On 7.6 Red Hat build of Keycloak 24.0 DB_VENDOR .spec.db.vendor - will need to be set to PostgreSQL if PostgreSQL is still being used DB_DATABASE .spec.db.database DB_MIN_POOL_SIZE .spec.db.poolMinSize DB_MAX_POOL_SIZE .spec.db.maxPoolSize DB_TX_ISOLATION may be set by the spec.db.url if it is supported by the driver or as a general setting on the target database DB_USERNAME .spec.db.usernameSecret DB_PASSWORD .spec.db.passwordSecret DB_JNDI No longer applicable 4.4.4. Networking Parameter Migration Red Hat Single Sign-On 7.6 Red Hat build of Keycloak 24.0 HOSTNAME_HTTP .spec.hostname.hostname - with .spec.http.httpEnabled=true. Since the Red Hat build of Keycloak operator will only create a single Ingress/Route, for this to create an http route .spec.http.tlsSecret needs to be left unspecified HOSTNAME_HTTPS .spec.hostname.hostname - with .spec.http.tlsSecret specified. SSO_HOSTNAME .spec.hostname.hostname HTTPS_SECRET .spec.http.tlsSecret - see the other HTTPS parameters below HTTPS_KEYSTORE HTTPS_KEYSTORE_TYPE HTTPS_NAME HTTPS_PASSWORD No longer applicable. The secret referenced by .spec.http.tlsSecret should be of type kubernetes.io/tls with tls.crt and tls.key entries X509_CA_BUNDLE .spec.truststores Note that the Red Hat build of Keycloak Operator does not currently support a way to configure the TLS termination. By default, the passthrough strategy is used. Therefore, the proxy option is not yet exposed as a first-class citizen option field, because it does not matter whether the passthrough or reencrypt strategy is used. However, if you need this option, you can replace the default Ingress Operator certificate and manually configure a Route in order to trust Red Hat build of Keycloak's certificate. The default behavior of the Red Hat build of Keycloak Operator can be then overridden by: additionalOptions: name: proxy value: reencrypt 4.4.5. JGroups Parameter Migration JGROUPS_ENCRYPT_SECRET, JGROUPS_ENCRYPT_KEYSTORE, JGROUPS_ENCRYPT_NAME, JGROUPS_ENCRYPT_PASSWORD, and JGROUPS_CLUSTER_PASSWORD have no first-class representation in the Keycloak CR. Securing cache communication is still possible using the cache configuration file. 
Additional resources Configuring distributed cache | [
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: rhbk spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: sso-x509-https-secret",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: rhsso spec: replicas: 1 template: spec: volumes: - name: sso-x509-https-volume secret: secretName: sso-x509-https-secret defaultMode: 420 containers: volumeMounts: - name: sso-x509-https-volume readOnly: true env: - name: DB_SERVICE_PREFIX_MAPPING value: postgres-db=DB - name: DB_USERNAME value: username - name: DB_PASSWORD value: password",
"additionalOptions: name: proxy value: reencrypt"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/migration_guide/migrating-openshift |
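Before applying a Keycloak CR like the one above, the Secrets it references must already exist in the target namespace. The following sketch shows one possible way to create them; the credential values, certificate paths, and the rhbk-keycloak.yaml file name are placeholders for your own values.
oc create secret generic keycloak-db-secret \
  --from-literal=username=<db_username> \
  --from-literal=password=<db_password>               # database credentials referenced by .spec.db
oc create secret tls sso-x509-https-secret \
  --cert=<path_to_tls.crt> --key=<path_to_tls.key>    # TLS secret referenced by .spec.http.tlsSecret
oc apply -f rhbk-keycloak.yaml                        # the Keycloak CR shown above, saved to a file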
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview | Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview OpenShift Container Platform 4 clusters are different from OpenShift Container Platform 3 clusters. OpenShift Container Platform 4 clusters contain new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. To learn more about migrating from OpenShift Container Platform 3 to 4 see About migrating from OpenShift Container Platform 3 to 4 . 1.1. Differences between OpenShift Container Platform 3 and 4 Before migrating from OpenShift Container Platform 3 to 4, you can check differences between OpenShift Container Platform 3 and 4 . Review the following information: Architecture Installation and update Storage , network , logging , security , and monitoring considerations 1.2. Planning network considerations Before migrating from OpenShift Container Platform 3 to 4, review the differences between OpenShift Container Platform 3 and 4 for information about the following areas: DNS considerations Isolating the DNS domain of the target cluster from the clients . Setting up the target cluster to accept the source DNS domain . You can migrate stateful application workloads from OpenShift Container Platform 3 to 4 at the granularity of a namespace. To learn more about MTC see Understanding MTC . Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . 1.3. Installing MTC Review the following tasks to install the MTC: Install the Migration Toolkit for Containers Operator on target cluster by using Operator Lifecycle Manager (OLM) . Install the legacy Migration Toolkit for Containers Operator on the source cluster manually . Configure object storage to use as a replication repository . 1.4. Upgrading MTC You upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.15 by using OLM. You upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. 1.5. Reviewing premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the premigration checklists . 1.6. Migrating applications You can migrate your applications by using the MTC web console or the command line . 1.7. Advanced migration options You can automate your migrations and modify MTC custom resources to improve the performance of large-scale migrations by using the following options: Running a state migration Creating migration hooks Editing, excluding, and mapping migrated resources Configuring the migration controller for large migrations 1.8. Troubleshooting migrations You can perform the following troubleshooting tasks: Viewing migration plan resources by using the MTC web console Viewing the migration plan aggregated log file Using the migration log reader Accessing performance metrics Using the must-gather tool Using the Velero CLI to debug Backup and Restore CRs Using MTC custom resources for troubleshooting Checking common issues and concerns 1.9. Rolling back a migration You can roll back a migration by using the MTC web console, by using the CLI, or manually. 1.10. Uninstalling MTC and deleting resources You can uninstall the MTC and delete its resources to clean up the cluster. 
| null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migrating_from_version_3_to_4/migration-from-version-3-to-4-overview |
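As a rough sanity check after installing the Migration Toolkit for Containers on the target cluster, you can confirm that the Operator's components are running and that its custom resources are registered; openshift-migration is the default namespace used by the Operator, so adjust it if you chose a different one.
oc get pods -n openshift-migration                 # controller, UI, and Velero pods should be Running
oc get crd | grep migration.openshift.io           # MigCluster, MigPlan, MigMigration, and related CRDs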
5.309. spice-server | 5.309. spice-server 5.309.1. RHBA-2012:0765 - spice-server bug fix and enhancement update Updated spice-server packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Simple Protocol for Independent Computing Environments (SPICE) is a remote display protocol for virtual environments. SPICE users can access a virtualized desktop or server from the local system or any system with network access to the server. SPICE is used in Red Hat Enterprise Linux for viewing virtualized guests running on the Kernel-based Virtual Machine (KVM) hypervisor or on Red Hat Enterprise Virtualization Hypervisors. The spice-server packages have been upgraded to upstream version 0.10.1, which fixes multiple bugs and adds multiple enhancements. (BZ# 758089 ) Bug Fixes BZ# 741259 Prior to this update, the smart card channel looked for the error code at the wrong location. As a consequence, the error messages contained random code instead of the actual error code. This update modifies the smart card channel code so that correct error messages are now sent. BZ# 787669 Prior to this update, the server rejected connections without logging any information to the qemu log if the client provided a wrong password. This update modifies qemu-kvm so that the messages "Invalid password" or "Ticket has expired" are sent when the client provides a wrong password. BZ# 787678 Prior to this update, qemu did not log X.509 files. As a consequence, no output regarding certificates or keys was available. This update modifies the underlying code so that information on X.509 files is now available. BZ# 788444 Prior to this update, the "struct sockaddr" code in the spice server library API was too short to hold longer IPv6 addresses. As a consequence, the reported IPv6 address appeared to be broken or incomplete. This update modifies the underlying code to use "struct sockaddr_storage", which can hold complete IPv6 addresses. BZ# 790749 Prior to this update, the default lifetime of the "SpiceChannelEventInfo" event was too short for the "main_dispatcher_handle_channel" event. As a consequence, freed memory could be accessed after the RedsStream was freed for the cursor and display channels. This update allocates the "SpiceChannelEventInfo" event together with allocating the "RedsStream" event, and deallocates it only after the "DESTROY" event. BZ# 813826 Prior to this update, the display driver could send bitmaps to the spice server that contained video frames, but were larger than the frames sent before. As a consequence, the larger frames were not synchronized with the video stream, and their display time could differ from the display time of other frames and the playback seemed to skip and interrupt. With this update, large bitmaps are directly attached to the video stream they contain. Now, the playback is smooth and no longer interrupts. Enhancement BZ# 758091 Prior to this update, USB devices could not be redirected over the network. This update adds USB redirection support to spice-server. All users requiring spice-server are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/spice-server |
Serverless | Serverless OpenShift Container Platform 4.10 OpenShift Serverless installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/serverless/index |
Preface | Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 6.9 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release, as well as known problems. The Technical Notes document provides a list of notable bug fixes, all currently available Technology Previews, deprecated functionality, and other information. Capabilities and limits of Red Hat Enterprise Linux 6 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/pref-red_hat_enterprise_linux-6.9_release_notes-preface |
Chapter 3. Installing a cluster on Nutanix in a restricted network | Chapter 3. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.13, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 3.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 3.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.3. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.5. 
Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 3.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. Have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Verify that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. 
Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . 
An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). 
String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . 
Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 3.6.1.4. Additional Nutanix configuration parameters Additional Nutanix configuration parameters are described in the following table: Table 3.4. Additional Nutanix cluster parameters Parameter Description Values compute.platform.nutanix.categories.key The name of a prism category key to apply to compute VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String compute.platform.nutanix.categories.value The value of a prism category key-value pair to apply to compute VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String compute.platform.nutanix.project.type The type of identifier you use to select a project for compute VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid compute.platform.nutanix.project.name or compute.platform.nutanix.project.uuid The name or UUID of a project with which compute VMs are associated. This parameter must be accompanied by the type parameter. String compute.platform.nutanix.bootType The boot type that the compute machines use. You must use the Legacy boot type in OpenShift Container Platform 4.13. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . controlPlane.platform.nutanix.categories.key The name of a prism category key to apply to control plane VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String controlPlane.platform.nutanix.categories.value The value of a prism category key-value pair to apply to control plane VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String controlPlane.platform.nutanix.project.type The type of identifier you use to select a project for control plane VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . 
name or uuid controlPlane.platform.nutanix.project.name or controlPlane.platform.nutanix.project.uuid The name or UUID of a project with which control plane VMs are associated. This parameter must be accompanied by the type parameter. String platform.nutanix.defaultMachinePlatform.categories.key The name of a prism category key to apply to all VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String platform.nutanix.defaultMachinePlatform.categories.value The value of a prism category key-value pair to apply to all VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String platform.nutanix.defaultMachinePlatform.project.type The type of identifier you use to select a project for all VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid . platform.nutanix.defaultMachinePlatform.project.name or platform.nutanix.defaultMachinePlatform.project.uuid The name or UUID of a project with which all VMs are associated. This parameter must be accompanied by the type parameter. String platform.nutanix.defaultMachinePlatform.bootType The boot type for all machines. You must use the Legacy boot type in OpenShift Container Platform 4.13. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . platform.nutanix.apiVIP The virtual IP (VIP) address that you configured for control plane API access. IP address platform.nutanix.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. IP address platform.nutanix.prismCentral.endpoint.address The Prism Central domain name or IP address. String platform.nutanix.prismCentral.endpoint.port The port that is used to log into Prism Central. String platform.nutanix.prismCentral.password The password for the Prism Central user name. String platform.nutanix.prismCentral.username The user name that is used to log into Prism Central. String platform.nutanix.prismElments.endpoint.address The Prism Element domain name or IP address. [ 1 ] String platform.nutanix.prismElments.endpoint.port The port that is used to log into Prism Element. String platform.nutanix.prismElements.uuid The universally unique identifier (UUID) for Prism Element. String platform.nutanix.subnetUUIDs The UUID of the Prism Element network that contains the virtual IP addresses and DNS records that you configured. [ 2 ] String platform.nutanix.clusterOSImage Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 The prismElements section holds a list of Prism Elements (clusters). A Prism Element encompasses all of the Nutanix resources, for example virtual machines and subnets, that are used to host the OpenShift Container Platform cluster. Only a single Prism Element is supported. Only one subnet per OpenShift Container Platform cluster is supported. 
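Before you create or customize the install-config.yaml file, it can be useful to confirm that the DNS records associated with the API and ingress virtual IP addresses resolve as expected. The following commands are only an illustrative sanity check: the host names assume the sample cluster name test-cluster and base domain example.com that are used in the sample file in the next section, and they require the dig utility on your workstation.
$ dig +short api.test-cluster.example.com
$ dig +short console-openshift-console.apps.test-cluster.example.com
Each query is expected to return the VIP that you plan to set in platform.nutanix.apiVIP or platform.nutanix.ingressVIP, for example 10.40.142.7 and 10.40.142.8 in the sample file.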
3.6.2. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 
5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 3.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
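For example, assuming that /usr/local/bin exists and is already included in your PATH (a common default, but verify this on your system), you can move the binary there:
$ chmod +x oc
$ sudo mv oc /usr/local/bin/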
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=nutanix \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. If the specified directory does not exist, this command creates it. 
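Optionally, list the extracted files to confirm that the CredentialsRequest objects were written to the directory. The path in this example is the same placeholder that is used in the preceding command:
$ ls <path_to_directory_with_list_of_credentials_requests>/credrequests
For a Nutanix cluster, the expected files are shown in the example credrequests directory contents later in this procedure.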
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on Nutanix 0000_26_cloud-controller-manager-operator_18_credentialsrequest-nutanix.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. Use the ccoctl tool to process all of the CredentialsRequest objects in the credrequests directory by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Specify the directory that contains the files of the component credentials secrets, under the manifests directory. By default, the ccoctl tool creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . To specify a different directory, use the --credentials-source-filepath flag. Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 3.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
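Note that the installation program consumes the install-config.yaml file when it creates the cluster. If you want to keep a record of the configuration, or reuse it later, consider saving a copy before you deploy; the file name used here is only an example:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup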
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10. Post installation Complete the following steps to complete the configuration of your cluster. 3.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.10.2. 
Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 3.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 3.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 3.12. Additional resources About remote health monitoring 3.13. steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install coreos print-stream-json",
"\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"",
"platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=nutanix --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"0000_26_cloud-controller-manager-operator_18_credentialsrequest-nutanix.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned |
Chapter 1. Support overview | Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters . Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs.
Crio issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Cleaning CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: Viewing the status of the Red Hat OpenShift Logging Operator Viewing the status of logging components Troubleshooting logging alerts Collecting information about your logging environment by using the oc adm must-gather command OpenShift CLI ( oc ) issues : Investigate OpenShift CLI ( oc ) issues by increasing the log level. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/support/support-overview |
Chapter 1. Data Grid 8 upgrade notes | Chapter 1. Data Grid 8 upgrade notes Review the details in this section before upgrading from one Data Grid 8 version to another. 1.1. Upgrading to Data Grid 8.4 Read the following information to ensure a successful upgrade from versions of Data Grid 8 to 8.4: Hot Rod client defaults Data Grid 8.4.6 introduced changes to the properties of the Hot Rod client. infinispan.client.hotrod.ssl_hostname_validation A new property, infinispan.client.hotrod.ssl_hostname_validation with a default value of true . This property enables TLS hostname validation based on RFC 2818 rules. Additionally, setting the infinispan.client.hotrod.sni_host_name is now required when hostname validation is enabled. Table 1.1. Default property changes Property Data Grid 8.4 versions infinispan.client.hotrod.connect_timeout 2000 ms / 2 seconds 60000 ms / 60 seconds infinispan.client.hotrod.socket_timeout 2000 ms / 2 seconds 60000 ms / 60 seconds infinispan.client.hotrod.max_retries 3 10 infinispan.client.hotrod.min_evictable_idle_time 180000 ms / 3 minutes 1800000 ms / 30 minutes Improved metrics naming for JGroups and cross-site metrics In Data Grid 8.4.4, you can enable the name-as-tags property for JGroups metrics and cross-site metrics. Enabling name-as-tags simplifies metrics, displaying cluster and site names as tags rather than including them in metric names. When you set name-as-tags to false , metrics are named based on the channel, resulting in multiple metrics for the same purpose: When you set name-as-tags to true , metrics are simplified, and cluster and site names appear as tags: In addition to simplified metrics, when you change the cluster name and site name, there is no need to update Grafana dashboards, as the metric names remain consistent. Migrating from Java 8 As of Data Grid 8.4, Red Hat supports Java 11 and Java 17 for Data Grid Server installations, Hot Rod Java clients, and when using Data Grid for embedded caches in custom applications. Data Grid users must upgrade their applications at least to Java 11. Support for Java 8 was deprecated in Data Grid 8.2 and removed in Data Grid 8.4. Embedded caches Red Hat supports Java 11 and Java 17 when using Data Grid for embedded caches in custom applications. Data Grid users must upgrade their applications at least to Java 11. Remote caches Red Hat supports Java 11 and Java 17 for Data Grid Server and Hot Rod Java clients. Hot Rod Java clients running in applications that require Java 8 can continue using older versions of client libraries. Red Hat supports using older Hot Rod Java client versions in combination with the latest Data Grid Server version. However, if you continue using older version of the client you will miss fixes and enhancements. Important OpenJDK 17 has removed support for the Nashorn JavaScript engine, its APIs, and the jjs tool. If your Data Grid Server uses JavaScript to automate tasks, you must install the Nashorn JavaScript engine. Adding Jakarta EE dependencies As of version 8.4 Data Grid distributes Jakarta EE 9+ based jars. If your application requires Jakarta specific dependencies, append the artifacts with -jakarta , for example: pom.xml <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-client-hotrod-jakarta</artifactId> </dependency> Updating store properties configuration In Data Grid 8.4, store provided properties no longer override store explicit configuration. The following example illustrates how you should update your configuration. 
This example sets the read-only configuration of the store to true : example.xml <persistence> <file-store> <index path="testCache/index" /> <data path="testCache/data" /> <property name="readOnly">true</property> </file-store> </persistence> In the updated example below, the read-only property is provided to the store itself and does not affect the store's explicit configuration. example.xml <persistence> <file-store read-only="true"> <index path="testCache/index" /> <data path="testCache/data" /> </file-store> </persistence> Additional resources Migrating to Data Grid 8 Deprecated features and functionality In Data Grid 8.4.3, the following features and functionality are deprecated and planned to be removed in future Data Grid releases. Red Hat will provide support for these features and functionalities during the current release lifecycle, but they will no longer receive enhancements and will be eventually removed. It is recommended to transition to alternative solutions to ensure future compatibility. Support for Java 11 Support for Java 11 is deprecated and planned to be removed in Data Grid version 8.5. Users of Data Grid 8.5 must upgrade their applications at least to Java 17. You can continue using older Hot Rod Java client versions in combination with the latest Data Grid Server version. However, if you continue using older versions of the client, you will miss fixes and enhancements. Support for Java EE dependencies Support for Java EE dependencies is deprecated and planned to be removed in Data Grid version 8.5. Transition to Jakarta EE dependencies and APIs to align with the evolving Java enterprise ecosystem. Support for Spring 5.x and Spring Boot 2.x Support for Spring Boot 2.x and Spring 5.x is deprecated and planned to be removed in Data Grid version 8.5. Migrate to newer versions of Spring Boot and Spring framework for compatibility with future Data Grid releases. Support for JCache Support for JCache (JSR 107) is deprecated and planned to be removed in Data Grid version 8.5. As an alternative, use other caching API developments in the Jakarta EE ecosystem. Deprecation of Data Grid modules for Red Hat JBoss EAP The Data Grid modules for Red Hat JBoss EAP applications that were distributed as a part of the Data Grid release are deprecated and planned to be removed in Data Grid version 8.5. JBoss EAP users can use the infinispan subsystem that is integrated within the JBoss EAP product release without the need to separately install Data Grid modules. Scattered cache mode Scattered cache mode is deprecated and planned to be removed in Data Grid version 8.5. As an alternative to scattered caches, you can use distributed caches instead. Adding caches to ignore list using the Data Grid Console or REST API The ability to add caches to the ignore list using the Data Grid Console or REST API, which allows temporarily excluding specific caches from client requests, is deprecated. This feature is planned to be removed in future releases. Cache service type The Cache service type is deprecated and planned to be removed in Data Grid 8.5. The Cache service type was designed to provide a convenient way to create a low-latency data store with minimal configuration. Use the DataGrid service type to automate complex operations such as cluster upgrades and data migration. Testing Data Grid Server on Windows The support for Data Grid Server on Windows Server 2019 is deprecated and planned to be removed in Data Grid 8.5.
However, the Data Grid team will continue testing C++ Hot Rod client with Windows Server 2019. Deprecation of the PrincipalRoleMapperContext interface org.infinispan.security.PrincipalRoleMapperContext was deprecated in Data Grid 8.4 and replaced by org.infinispan.security.AuthorizationMapperContext . Removal of the fetch-state store property The fetch-state attribute has been deprecated and removed in Data Grid 8.4 without any replacement. You can remove the attribute from your xml configuration. This change does not affect shared stores that have access to the same data. Local cache stores can use purge on startup to avoid loading stale entries from persistent storage. Upgrade from 8.1 at a minimum If you are upgrading from 8.0, you must first upgrade to 8.1. Persistent data in Data Grid 8.0 is not binary compatible with later versions. To overcome this incompatibility issue, Data Grid 8.2 and later automatically converts existing persistent cache stores from Data Grid 8.1 at cluster startup. However, Data Grid does not convert cache stores from Data Grid 8.0. | [
"TYPE vendor_jgroups_xsite_frag4_get_number_of_sent_fragments gauge HELP vendor_jgroups_xsite_frag4_get_number_of_sent_fragments Number of sent fragments vendor_jgroups_xsite_frag4_get_number_of_sent_fragments{cluster=\"xsite\",node=\"...\"} 0.0 TYPE vendor_jgroups_cluster_frag4_get_number_of_sent_fragments gauge HELP vendor_jgroups_cluster_frag4_get_number_of_sent_fragments Number of sent fragments vendor_jgroups_cluster_frag4_get_number_of_sent_fragments{cluster=\"cluster\",node=\"...\"} 2.0",
"TYPE vendor_jgroups_frag4_get_number_of_sent_fragments gauge HELP vendor_jgroups_frag4_get_number_of_sent_fragments Number of sent fragments vendor_jgroups_frag4_get_number_of_sent_fragments{cache_manager=\"default\",cluster=\"xsite\",node=\"...\"} 0.0 vendor_jgroups_frag4_get_number_of_sent_fragments{cache_manager=\"default\",cluster=\"cluster\",node=\"...\"} 2.0",
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-client-hotrod-jakarta</artifactId> </dependency>",
"<persistence> <file-store> <index path=\"testCache/index\" /> <data path=\"testCache/data\" /> <property name=\"readOnly\">true</property> </file-store> </persistence>",
"<persistence> <file-store read-only=\"true\"> <index path=\"testCache/index\" /> <data path=\"testCache/data\" /> </file-store> </persistence>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/upgrading_data_grid/upgrade-notes |
probe::tcpmib.CurrEstab | probe::tcpmib.CurrEstab Name probe::tcpmib.CurrEstab - Update the count of open sockets Synopsis tcpmib.CurrEstab Values sk pointer to the struct sock being acted on op value to be added to the counter (default value of 1) Description The socket pointed to by sk is filtered by the function tcpmib_filter_key . If the socket passes the filter, it is counted in the global CurrEstab (equivalent to SNMP's MIB TCP_MIB_CURRESTAB) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcpmib-currestab
3.15. Software Collection SELinux Support | 3.15. Software Collection SELinux Support Because Software Collections are designed to install the Software Collection packages in an alternate directory, set up the necessary SELinux labels so that SELinux is aware of the alternate directory. If the file system hierarchy of your Software Collection package imitates the file system hierarchy of the corresponding conventional package, you can run the semanage fcontext and restorecon commands to set up the SELinux labels. For example, if the /opt/provider/software_collection_1/root/usr/ directory in your Software Collection package imitates the /usr/ directory of your conventional package, set up the SELinux labels as follows: semanage fcontext -a -e /usr /opt/provider/software_collection_1/root/usr restorecon -R -v /opt/provider/software_collection_1/root/usr The commands above ensure that all directories and files in the /opt/provider/software_collection_1/root/usr/ directory are labeled by SELinux as if they were located in the /usr/ directory. 3.15.1. SELinux Support in Red Hat Enterprise Linux 7 When packaging a Software Collection for Red Hat Enterprise Linux 7, add the following commands to the %post section in the Software Collection metapackage to set up the SELinux labels: semanage fcontext -a -e /usr /opt/provider/software_collection_1/root/usr restorecon -R -v /opt/provider/software_collection_1/root/usr selinuxenabled && load_policy || : The last command ensures that the newly created SELinux policy is properly loaded, and that the files installed by a package in the Software Collection are created with the correct SELinux context. By using this command in the metapackage, you do not need to include the restorecon command in all packages in the Software Collection. Note that the semanage fcontext command is provided by the policycoreutils-python package, therefore it is important that you include policycoreutils-python in Requires for the Software Collection metapackage. Note The SELinux aspect of starting services has changed significantly in Red Hat Enterprise Linux 7. Most importantly, using the scl enable ... wrapper in a systemd service file will cause the service to be run as an unconfined process using the unconfined_service_t context. As this context has no transition rules by design, the service will not be able to transition into the target SELinux context indicated by the SELinux policy, which means scl enable ... cannot be used on Red Hat Enterprise Linux 7 if the service being started is supposed to be confined using SELinux. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Software_Collection_SELinux_Support |
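Putting the commands above together, the %post scriptlet of a Software Collection metapackage spec file might look like the following sketch. The provider and collection names are the example values used in this section, and the Requires line reflects the policycoreutils-python dependency mentioned above:

Requires: policycoreutils-python

%post
# Label the collection hierarchy as if it were located under /usr
semanage fcontext -a -e /usr /opt/provider/software_collection_1/root/usr
restorecon -R -v /opt/provider/software_collection_1/root/usr
# Load the policy so files installed by collection packages get the correct context
selinuxenabled && load_policy || :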
Chapter 12. Network Time Protocol | Chapter 12. Network Time Protocol You need to ensure that the systems within your Red Hat OpenStack Platform cluster keep accurate time and have consistent timestamps across systems. Red Hat OpenStack Platform on Red Hat Enterprise Linux 8 supports Chrony for time management. For more information, see Using the Chrony suite to configure NTP . 12.1. Why consistent time is important Consistent time throughout your organization is important for both operational and security needs: Identifying a security event Consistent timekeeping helps you correlate timestamps for events on affected systems so that you can understand the sequence of events. Authentication and security systems Security systems can be sensitive to time skew, for example: A Kerberos-based authentication system might refuse to authenticate clients that are affected by even a few seconds of clock skew. Transport layer security (TLS) certificates depend on a valid source of time. A client-to-server TLS connection fails if clock skew places the client or server system time outside the certificate's validity ( Valid From / Valid To ) range. Red Hat OpenStack Platform services Some core OpenStack services are especially dependent on accurate timekeeping, including High Availability (HA) and Ceph. 12.2. NTP design Network Time Protocol (NTP) is organized in a hierarchical design. Each layer is called a stratum. At the top of the hierarchy are stratum 0 devices such as atomic clocks. In the NTP hierarchy, stratum 0 devices provide the reference for publicly available stratum 1 and stratum 2 NTP time servers. Do not connect your data center clients directly to publicly available NTP stratum 1 or 2 servers. The number of direct connections would put unnecessary strain on the public NTP resources. Instead, allocate a dedicated time server in your data center, and connect the clients to that dedicated server. Configure instances to receive time from your dedicated time servers, not the host on which they reside. Note Service containers running within the Red Hat OpenStack Platform environment still receive time from the host on which they reside. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_network-time-protocol_security_and_hardening
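A minimal client-side chrony configuration for the design described above might look like the following sketch; the server names are placeholders for your dedicated data center time servers:

# /etc/chrony.conf on a data center client (example servers)
server clock0.example.com iburst
server clock1.example.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3

After restarting the chronyd service, running chronyc sources -v shows whether the client is synchronizing against the intended servers.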
Java SDK Guide | Java SDK Guide Red Hat Virtualization 4.3 Using the Red Hat Virtualization Java SDK Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This guide describes how to install and work with version 3 and version 4 of the Red Hat Virtualization Java software development kit. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/java_sdk_guide/index |
4.69. gdb | 4.69.1. RHBA-2011:1699 - gdb bug fix and enhancement update Updated gdb packages that fix multiple bugs and add three enhancements are now available for Red Hat Enterprise Linux 6. The GNU Debugger (GDB) allows users to debug programs written in C, C++, and other languages by executing them in a controlled fashion and then printing out their data. Bug Fixes BZ# 669432 Prior to this update, GDB could stop on error when trying to access the libpthread shared library before the library was relocated. The fixed GDB lets the relocations be resolved first, making such programs debuggable. BZ# 669434 The Intel Fortran Compiler records certain debug info symbols in uppercase but the gfortran compiler writes case-insensitive symbols in lowercase. As a result, GDB could terminate unexpectedly when accessing uppercase characters in the debug information from the Intel Fortran Compiler. With this update, GDB properly implements case insensitivity and ignores the case of symbols in the symbol files. BZ# 692386 When the user selected the "-statistics" option with a negative number as a result, GDB printed the minus sign twice. This has been fixed and GDB now displays negative numbers with one minus sign only. BZ# 697900 On the PowerPC and the IBM System z architectures, GDB displayed only LWP (light-weight process) identifiers which matched the Linux TID (Thread Identifier) values for the threads found in the core file. GDB has been fixed to initialize the libthread_db threads debugging library when accessing the core file. GDB now correctly displays the pthread_t identifier in addition to the LWP identifier on the aforementioned architectures. BZ# 702427 Structure field offsets above 65535 described by the DWARF DW_AT_data_member_location attribute were improperly interpreted as a 0 value. GDB has been modified and can now also handle large structures and their fields. BZ# 704010 The difference between the very closely related "ptype" and "whatis" commands was not clearly defined in the gdb info manual. Detailed differences between these commands have been described in the manual. BZ# 712117 Prior to this update, the "info sources" subcommand printed only relative paths to the source files. GDB has been modified to correctly display the full path name to the source file. BZ# 730475 Modifying a string in the executable using the "-write" command line option could fail with an error if the executable was not running. With this update, GDB can modify executables even before they are started. Enhancements BZ# 696890 With this update, Float16 instructions on future Intel processors are now supported. BZ# 698001 Debugged programs can open many shared libraries on demand at runtime using the dlopen() function. Prior to this update, tracking shared libraries that were in use by the debugged program could lead to significant overhead. The debugging performance of GDB has been improved: the overhead is now lower if applications load many objects. BZ# 718141 Prior to this update, GDB did not handle DWARF 4 .debug_types data correctly. Now, GDB can correctly process data in the DWARF 4 format. All GDB users are advised to upgrade to these updated gdb packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gdb
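The ptype / whatis distinction and the -write fix described above can be exercised in a short session. The program and variable names here are illustrative only:

gdb --write ./myprog
(gdb) whatis config
(gdb) ptype config
(gdb) info sources

whatis prints only the type name of config , ptype prints the full type definition, and info sources now lists full path names to the source files.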
4.3. Configuring Kerberos (with LDAP or NIS) Using authconfig | 4.3. Configuring Kerberos (with LDAP or NIS) Using authconfig Both LDAP and NIS authentication stores support Kerberos authentication methods. Using Kerberos has a couple of benefits: It uses a security layer for communication while still allowing connections over standard ports. It automatically uses credentials caching with SSSD, which allows offline logins. Note Using Kerberos authentication requires the krb5-libs and krb5-workstation packages. 4.3.1. Configuring Kerberos Authentication from the UI The Kerberos password option from the Authentication Method drop-down menu automatically opens the fields required to connect to the Kerberos realm. Figure 4.2. Kerberos Fields Realm gives the name for the realm for the Kerberos server. The realm is the network that uses Kerberos, composed of one or more key distribution centers (KDC) and a potentially large number of clients. KDCs gives a comma-separated list of servers that issue Kerberos tickets. Admin Servers gives a list of administration servers running the kadmind process in the realm. Optionally, use DNS to resolve server host name and to find additional KDCs within the realm. 4.3.2. Configuring Kerberos Authentication from the Command Line Both LDAP and NIS allow Kerberos authentication to be used in place of their native authentication mechanisms. At a minimum, using Kerberos authentication requires specifying the realm, the KDC, and the administrative server. There are also options to use DNS to resolve client names and to find additional admin servers. | [
"authconfig NIS or LDAP options --enablekrb5 --krb5realm EXAMPLE --krb5kdc kdc.example.com:88,server.example.com:88 --krb5adminserver server.example.com:749 --enablekrb5kdcdns --enablekrb5realmdns --update"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/authconfig-kerberos |
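Substituting LDAP options for the placeholder shown in the command above, a complete invocation might look like the following sketch; the host names, base DN, and realm are example values only:

authconfig --enableldap --ldapserver=ldap://ldap.example.com \
    --ldapbasedn="dc=example,dc=com" \
    --enablekrb5 --krb5realm=EXAMPLE.COM \
    --krb5kdc=kdc.example.com:88 --krb5adminserver=kdc.example.com:749 \
    --enablekrb5kdcdns --enablekrb5realmdns --update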
Chapter 2. Understanding ephemeral storage | Chapter 2. Understanding ephemeral storage 2.1. Overview In addition to persistent storage, pods and containers can require ephemeral or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to the lack of local storage accounting and isolation include the following: Pods do not know how much local storage is available to them. Pods cannot request guaranteed local storage. Local storage is a best effort resource. Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage has been reclaimed. Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node, in addition to other uses by the system, the container runtime, and OpenShift Container Platform. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows OpenShift Container Platform to schedule pods where appropriate, and to protect the node against excessive use of local storage. While the ephemeral storage framework allows administrators and developers to better manage this local storage, it does not provide any promises related to I/O throughput and latency. 2.2. Types of ephemeral storage Ephemeral local storage is always made available in the primary partition. There are two basic ways of creating the primary partition: root and runtime. Root This partition holds the kubelet root directory, /var/lib/kubelet/ by default, and /var/log/ directory. This partition can be shared between user pods, the OS, and Kubernetes system daemons. This partition can be consumed by pods through EmptyDir volumes, container logs, image layers, and container-writable layers. Kubelet manages shared access and isolation of this partition. This partition is ephemeral, and applications cannot expect any performance SLAs, such as disk IOPS, from this partition. Runtime This is an optional partition that runtimes can use for overlay file systems. OpenShift Container Platform attempts to identify and provide shared access along with isolation to this partition. Container image layers and writable layers are stored here. If the runtime partition exists, the root partition does not hold any image layer or other writable storage. 2.3. Ephemeral storage management Cluster administrators can manage ephemeral storage within a project by setting quotas that define the limit ranges and number of requests for ephemeral storage across all pods in a non-terminal state. Developers can also set requests and limits on this compute resource at the pod and container level. 2.4. Monitoring ephemeral storage You can use /bin/df as a tool to monitor ephemeral storage usage on the volume where ephemeral container data is located, which is /var/lib/kubelet and /var/lib/containers . The available space for only /var/lib/kubelet is shown when you use the df command if /var/lib/containers is placed on a separate disk by the cluster administrator. To show the human-readable values of used and available space in /var/lib , enter the following command: USD df -h /var/lib The output shows the ephemeral storage usage in /var/lib : Example output Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% / | [
"df -h /var/lib",
"Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% /"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/storage/understanding-ephemeral-storage |
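As a sketch of the pod-level requests and limits described above, the following pod specification asks the scheduler for 2Gi of ephemeral storage and caps usage at 4Gi; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  containers:
  - name: app
    image: registry.example.com/demo/app:latest
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"

If the container's writable layer and emptyDir usage exceed the 4Gi limit, the kubelet evicts the pod, which matches the eviction behavior described in this chapter.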
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ OpenWire JMS examples require a running message broker with a queue named exampleQueue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named exampleQueue . USD <broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2020-10-08 11:29:04 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_openwire_jms_client/using_the_broker_with_the_examples |
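Although not part of the documented procedure, you can optionally sanity-check the queue with the broker's built-in CLI clients before running the examples; the message count is arbitrary and option names may vary slightly between broker versions:

<broker-instance-dir>/bin/artemis producer --destination queue://exampleQueue --message-count 10
<broker-instance-dir>/bin/artemis queue stat --queueName exampleQueue

The queue stat output should show the ten test messages waiting on exampleQueue .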
Chapter 2. Viewing, starting and stopping the Identity Management services | Chapter 2. Viewing, starting and stopping the Identity Management services Identity Management (IdM) servers are Red Hat Enterprise Linux systems that work as domain controllers (DCs). A number of different services are running on IdM servers, most notably the Directory Server, Certificate Authority (CA), DNS, and Kerberos. 2.1. The IdM services There are many different services that can be installed and run on the IdM servers and clients. List of services hosted by IdM servers Most of the following services are not strictly required to be installed on the IdM server. For example, you can install services such as a certificate authority (CA) or DNS server on an external server outside the IdM domain. Kerberos the krb5kdc and kadmin services IdM uses the Kerberos protocol to support single sign-on. With Kerberos, users only need to present the correct username and password once and can access IdM services without the system prompting for credentials again. Kerberos is divided into two parts: The krb5kdc service is the Kerberos Authentication service and Key Distribution Center (KDC) daemon. The kadmin service is the Kerberos database administration program. For information about how to authenticate using Kerberos in IdM, see Logging in to Identity Management from the command line and Logging in to IdM in the Web UI: Using a Kerberos ticket . LDAP directory server the dirsrv service The IdM LDAP directory server instance stores all IdM information, such as information related to Kerberos, user accounts, host entries, services, policies, DNS, and others. The LDAP directory server instance is based on the same technology as Red Hat Directory Server . However, it is tuned to IdM-specific tasks. Certificate Authority the pki-tomcatd service The integrated certificate authority (CA) is based on the same technology as Red Hat Certificate System . pki is the command-line interface for accessing Certificate System services. You can also install the server without the integrated CA if you create and provide all required certificates independently. For more information, see Planning your CA services . Domain Name System (DNS) the named service IdM uses DNS for dynamic service discovery. The IdM client installation utility can use information from DNS to automatically configure the client machine. After the client is enrolled in the IdM domain, it uses DNS to locate IdM servers and services within the domain. The BIND (Berkeley Internet Name Domain) implementation of the DNS (Domain Name System) protocols in Red Hat Enterprise Linux includes the named DNS server. named-pkcs11 is a version of the BIND DNS server built with native support for the PKCS#11 cryptographic standard. For information, see Planning your DNS services and host names . Apache HTTP Server the httpd service The Apache HTTP web server provides the IdM Web UI, and also manages communication between the Certificate Authority and other IdM services. Samba / Winbind smb and winbind services Samba implements the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol, in Red Hat Enterprise Linux. Via the smb service, the SMB protocol enables you to access resources on a server, such as file shares and shared printers. If you have configured a Trust with an Active Directory (AD) environment, the`Winbind` service manages communication between IdM servers and AD servers. 
One-time password (OTP) authentication the ipa-otpd services One-time passwords (OTP) are passwords that are generated by an authentication token for only one session, as part of two-factor authentication. OTP authentication is implemented in Red Hat Enterprise Linux via the ipa-otpd service. For more information, see Logging in to the Identity Management Web UI using one time passwords . OpenDNSSEC the ipa-dnskeysyncd service OpenDNSSEC is a DNS manager that automates the process of keeping track of DNS security extensions (DNSSEC) keys and the signing of zones. The ipa-dnskeysyncd service manages synchronization between the IdM Directory Server and OpenDNSSEC. List of services hosted by IdM clients System Security Services Daemon : the sssd service The System Security Services Daemon (SSSD) is the client-side application that manages user authentication and caching credentials. Caching enables the local system to continue normal authentication operations if the IdM server becomes unavailable or if the client goes offline. For more information, see Understanding SSSD and its benefits . Certmonger : the certmonger service The certmonger service monitors and renews the certificates on the client. It can request new certificates for the services on the system. For more information, see Obtaining an IdM certificate for a service using certmonger . 2.2. Viewing the status of IdM services To view the status of the IdM services that are configured on your IdM server, run the ipactl status command: The output of the ipactl status command on your server depends on your IdM configuration. For example, if an IdM deployment does not include a DNS server, the named service is not present in the list. Note You cannot use the IdM web UI to view the status of all the IdM services running on a particular IdM server. Kerberized services running on different servers can be viewed in the Identity Services tab of the IdM web UI. You can start or stop the entire server, or an individual service only. To start, stop, or restart the entire IdM server, see: Starting and stopping the entire Identity Management server To start, stop, or restart an individual IdM service, see: Starting and stopping an individual Identity Management service To display the version of IdM software, see: Methods for displaying IdM software version 2.3. Starting and stopping the entire Identity Management server Use the ipa systemd service to stop, start, or restart the entire IdM server along with all the installed services. Using the systemctl utility to control the ipa systemd service ensures all services are stopped, started, or restarted in the appropriate order. The ipa systemd service also upgrades the RHEL IdM configuration before starting the IdM services, and it uses the proper SELinux contexts when administrating with IdM services. You do not need to have a valid Kerberos ticket to run the systemctl ipa commands. ipa systemd service commands To start the entire IdM server: To stop the entire IdM server: To restart the entire IdM server: To show the status of all the services that make up IdM, use the ipactl utility: Important Do not directly use the ipactl utility to start, stop, or restart IdM services. Use the systemctl ipa commands instead, which call the ipactl utility in a predictable environment. You cannot use the IdM web UI to perform the ipactl commands. 2.4. Starting and stopping an individual Identity Management service Changing IdM configuration files manually is generally not recommended. 
However, certain situations require that an administrator performs a manual configuration of specific services. In such situations, use the systemctl utility to stop, start, or restart an individual IdM service. For example, use systemctl after customizing the Directory Server behavior, without modifying the other IdM services: Also, when initially deploying an IdM trust with Active Directory, modify the /etc/sssd/sssd.conf file, adding: Specific parameters to tune the timeout configuration options in an environment where remote servers have a high latency Specific parameters to tune the Active Directory site affinity Overrides for certain configuration options that are not provided by the global IdM settings To apply the changes you have made in the /etc/sssd/sssd.conf file: Running systemctl restart sssd.service is required because the System Security Services Daemon (SSSD) does not automatically re-read or re-apply its configuration. Note that for changes that affect IdM identity ranges, a complete server reboot is recommended. Important To restart multiple IdM domain services, always use systemctl restart ipa . Because of dependencies between the services installed with the IdM server, the order in which they are started and stopped is critical. The ipa systemd service ensures that the services are started and stopped in the appropriate order. Useful systemctl commands To start a particular IdM service: To stop a particular IdM service: To restart a particular IdM service: To view the status of a particular IdM service: Important You cannot use the IdM web UI to start or stop the individual services running on IdM servers. You can only use the web UI to modify the settings of a Kerberized service by navigating to Identity Services and selecting the service. Additional resources Starting and stopping the entire Identity Management server 2.5. Methods for displaying IdM software version You can display the IdM version number with: The IdM WebUI ipa commands rpm commands Displaying version through the WebUI In the IdM WebUI, the software version can be displayed by choosing About from the username menu at the upper-right. Displaying version with ipa commands From the command line, use the ipa --version command. Displaying version with rpm commands If IdM services are not operating properly, you can use the rpm utility to determine the version number of the ipa-server package that is currently installed. | [
"ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING named Service: RUNNING httpd Service: RUNNING pki-tomcatd Service: RUNNING smb Service: RUNNING winbind Service: RUNNING ipa-otpd Service: RUNNING ipa-dnskeysyncd Service: RUNNING ipa: INFO: The ipactl command was successful",
"systemctl start ipa",
"systemctl stop ipa",
"systemctl restart ipa",
"ipactl status",
"systemctl restart [email protected]",
"systemctl restart sssd.service",
"systemctl start name .service",
"systemctl stop name .service",
"systemctl restart name .service",
"systemctl status name .service",
"ipa --version VERSION: 4.8.0 , API_VERSION: 2.233",
"rpm -q ipa-server ipa-server-4.8.0-11 .module+el8.1.0+4247+9f3fd721.x86_64"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/viewing-starting-and-stopping-the-ipa-server_configuring-and-managing-idm |
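As an illustration of the sssd.conf tuning mentioned above, the following snippet raises two timeout options in a domain section; the domain name and values are examples only and are not taken from the source, so adjust them for your environment before applying them with systemctl restart sssd.service :

[domain/example.com]
# Illustrative timeout tuning for an environment with high-latency remote servers
dns_resolver_timeout = 10
krb5_auth_timeout = 15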
Chapter 6. Configuring automation controller on Red Hat OpenShift Container Platform web console | Chapter 6. Configuring automation controller on Red Hat OpenShift Container Platform web console You can use these instructions to configure the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database. Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface. Note When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information. 6.1. Prerequisites You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub. For automation controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured. For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, Redis, and API pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object. 6.1.1. Configuring your controller image pull policy Use this procedure to configure the image pull policy on your automation controller. Procedure Log in to Red Hat OpenShift Container Platform. Go to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller tab. For new instances, click Create AutomationController . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationController . Click Advanced configuration . Under Image Pull Policy , click the radio button to select Always , Never , or IfNotPresent . To display the option under Image Pull Secrets , click the arrow. Click + beside Add Image Pull Secret and enter a value. To display fields under the Web container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the Task container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the Redis container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the PostgreSQL container resource requirements (when using a managed instance) drop-down list, click the arrow. 
Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . Under Replicas, enter the number of instance replicas. Under Remove used secrets on instance removal , select true or false . The default is false. Under Preload instance with data upon creation , select true or false . The default is true. 6.1.2. Configuring your controller LDAP security You can configure your LDAP SSL configuration for automation controller through any of the following options: The automation controller user interface. The platform gateway user interface. See the Configuring LDAP authentication section of the Access management and authentication guide for additional steps. The following procedure steps. Procedure If you do not have a ldap_cacert_secret , you can create one with the following command: USD oc create secret generic <resourcename>-custom-certs \ --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \ 1 1 Modify this to point to where your CA cert is stored. This will create a secret that looks like this: USD oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: AutomationController type: Opaque 1 Automation controller looks for the data field ldap-ca.crt in the specified secret when using the ldap_cacert_secret . Under LDAP Certificate Authority Trust Bundle click the drop-down menu and select your ldap_cacert_secret . Under LDAP Password Secret , click the drop-down menu and select a secret. Under EE Images Pull Credentials Secret , click the drop-down menu and select a secret. Under Bundle Cacert Secret , click the drop-down menu and select a secret. Under Service Type , click the drop-down menu and select ClusterIP LoadBalancer NodePort 6.1.3. Configuring your automation controller operator route options The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration . Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller tab. For new instances, click Create AutomationController . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationController . Click Advanced configuration . Under Ingress type , click the drop-down menu and select Route . Under Route DNS host , enter a common host name that the route answers to. Under Route TLS termination mechanism , click the drop-down menu and select Edge or Passthrough . For most instances Edge should be selected. Under Route TLS credential secret , click the drop-down menu and select a secret from the list. Under Enable persistence for /var/lib/projects directory select either true or false by moving the slider. 6.1.4. Configuring the ingress type for your automation controller operator The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration . Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller tab. 
For new instances, click Create AutomationController . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationController . Click Advanced configuration . Under Ingress type , click the drop-down menu and select Ingress . Under Ingress annotations , enter any annotations to add to the ingress. Under Ingress TLS secret , click the drop-down menu and select a secret from the list. After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform will now create the pods. This may take a few minutes. You can view the progress by navigating to Workloads Pods and locating the newly created instance. Verification Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running: Operator manager controllers Automation controller Automation hub Event-Driven Ansible (EDA) The operator manager controllers include the following: automation-controller-operator-controller-manager automation-hub-operator-controller-manager resource-operator-controller-manager aap-gateway-operator-controller-manager ansible-lightspeed-operator-controller-manager eda-server-operator-controller-manager After deploying automation controller, you can see the addition of the following pods: controller controller-postgres controller-web controller-task After deploying automation hub, you can see the addition of the following pods: hub-api hub-content hub-postgres hub-redis hub-worker After deploying EDA, you can see the addition of the following pods: eda-activation-worker eda-api eda-default-worker eda-event-stream eda-scheduler Note A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod. 6.2. Configuring an external database for automation controller on Red Hat Ansible Automation Platform Operator Users who prefer to deploy Ansible Automation Platform with an external database can do so by configuring a secret with instance credentials and connection information, then applying it to their cluster using the oc create command. By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates. Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations. Note The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. The following section outlines the steps to configure an external database for your automation controller on the Ansible Automation Platform Operator. Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform. 
Note Ansible Automation Platform 2.5 supports PostgreSQL 15. Procedure The external postgres instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec. Create a postgres_configuration_secret YAML file, following the template below: apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: "<external_ip_or_url_resolvable_by_the_cluster>" 2 port: "<external_port>" 3 database: "<desired_database_name>" username: "<username_to_connect_as>" password: "<password_to_connect_with>" 4 sslmode: "prefer" 5 type: "unmanaged" type: Opaque 1 Namespace to create the secret in. This should be the same namespace you want to deploy to. 2 The resolvable hostname for your database node. 3 External port defaults to 5432 . 4 Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. 5 The variable sslmode is valid for external databases only. The allowed values are: prefer , disable , allow , require , verify-ca , and verify-full . Apply external-postgres-configuration-secret.yml to your cluster using the oc create command. USD oc create -f external-postgres-configuration-secret.yml When creating your AutomationController custom resource object, specify the secret on your spec, following the example below: apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration 6.3. Finding and deleting PVCs A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them. Procedure List the existing PVCs in your deployment namespace: oc get pvc -n <namespace> Identify the PVC associated with your deployment by comparing the old deployment name and the PVC name. Delete the old PVC: oc delete pvc -n <namespace> <pvc-name> 6.4. Additional resources For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide. | [
"oc create secret generic <resourcename>-custom-certs --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \\ 1",
"oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: AutomationController type: Opaque",
"apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque",
"oc create -f external-postgres-configuration-secret.yml",
"apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration",
"get pvc -n <namespace>",
"delete pvc -n <namespace> <pvc-name>"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/installing-controller-operator |
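If a pod is missing because of the pull secret issue noted above, you can create a registry pull secret and then reference it under Image Pull Secrets in the form described earlier; the registry, credentials, secret name, and namespace below are placeholders:

oc create secret docker-registry my-registry-pull-secret \
    --docker-server=registry.example.com \
    --docker-username=<username> \
    --docker-password=<password> \
    -n <namespace>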
Chapter 1. Integrating communications applications with the Hybrid Cloud Console | Chapter 1. Integrating communications applications with the Hybrid Cloud Console Receive event notifications in your preferred communications application by connecting the Hybrid Cloud Console with Microsoft Teams, Google Chat, or Slack. 1.1. Integrating Microsoft Teams with the Hybrid Cloud Console You can configure the Red Hat Hybrid Cloud Console to send event notifications to all users on a new or existing channel in Microsoft Teams. The Microsoft Teams integration supports events from all services in the Hybrid Cloud Console. The Microsoft Teams integration uses incoming webhooks to receive event data. Contacting support If you have any issues with integrating the Hybrid Cloud Console with Microsoft Teams, contact Red Hat for support. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help ( ? icon) > Open a support case , or view more options from ? > Support options . Microsoft will not provide troubleshooting. The Hybrid Cloud Console integration with Microsoft Teams is fully supported by Red Hat. 1.1.1. Configuring Microsoft Teams for integration with the Hybrid Cloud Console You can use incoming webhooks to configure Microsoft Teams to receive event notifications from the Red Hat Hybrid Cloud Console or a third-party application. Prerequisites You have admin permissions for Microsoft Teams. You have Organization Administrator or Notifications administrator permissions for the Hybrid Cloud Console. Procedure Create a new channel in Microsoft Teams or select an existing channel. Navigate to Apps and search for the Incoming Webhook application. Select the Incoming Webhook application and click Add to a team . Select the team or channel name and click Set up a connector . Enter a name for the incoming webhook (for example, Red Hat Notifications ). This name appears on all notifications that the Microsoft Teams channel receives from the Red Hat Hybrid Cloud Console through this incoming webhook. Optional: Upload an image to associate with the name of the incoming webhook. This image appears on all notifications that the Microsoft Teams channel receives from the Hybrid Cloud Console through this incoming webhook. Click Create to complete creation and display the webhook URL. Copy the URL to your clipboard. You need the URL to configure notifications in the Hybrid Cloud Console. Click Done . The Microsoft Teams page displays the channel and the incoming webhook. In the Hybrid Cloud Console, navigate to Settings > Integrations . Click the Communications tab. Click Add integration . Select Microsoft Office Teams as the integration type, and then click . In the Integration name field, enter a name for your integration (for example, console-teams ). Paste the incoming webhook URL that you copied from Microsoft Teams into the Endpoint URL field. Click . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you would like your integration to react to. To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the Microsoft Teams integration in the Integrations > Communications list. 
Under Last connection attempt , the status is Ready to show the connection can accept notifications from the Hybrid Cloud Console. Verification Create a test notification to confirm you have correctly connected Microsoft Teams to the Hybrid Cloud Console: to your Microsoft Teams integration on the Integrations > Communications page, click the options icon (...) and click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open your Microsoft Teams channel and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Microsoft Teams event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 1.1.2. Creating the behavior group for the Microsoft Teams integration A behavior group defines which notifications will be sent to external services such as Microsoft Teams when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. The Microsoft Teams integration is configured. For information about configuring Microsoft Teams integration, see Section 1.1.1, "Configuring Microsoft Teams for integration with the Hybrid Cloud Console" . Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events . Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Type a name for the behavior group and click . In the Actions and Recipients step, select Integration: Microsoft Teams from the Actions drop-down list. From the Recipient drop-down list, select the name of the integration you created (for example, console-teams ) and click . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click . Review your behavior group settings and click Finish . The new behavior group appears on the Notifications > Configure Events page in the Behavior Groups tab. Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to Microsoft Teams. Select the channel that you configured from the left menu. If the setup process succeeded, the page displays a notification from the Hybrid Cloud Console. The notification contains the name of the host that triggered the event and a link to that host, as well as the number of events and a link that opens the corresponding Hybrid Cloud Console service. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Microsoft Teams . If the label is green, the notification succeeded. 
If the label is red, verify that the incoming webhook connector was properly created in Microsoft Teams, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 1.1.3. Additional resources For information about troubleshooting your Microsoft Teams integration, see Troubleshooting Hybrid Cloud Console integrations . For more information about webhooks, see Create an Incoming Webhook and Webhooks and Connectors in the Microsoft Teams documentation. 1.2. Integrating Google Chat with the Red Hat Hybrid Cloud Console You can configure the Red Hat Hybrid Cloud Console to send event notifications to a new or existing Google space in Google Chat. The Google Chat integration supports events from all Hybrid Cloud Console services. The integration with the Hybrid Cloud Console notifications service uses incoming webhooks to receive event data. Each Red Hat account configures how and who can receive these events, with the ability to perform actions depending on the event type. Contacting Support If you have any issues with the Hybrid Cloud Console integration with Google Chat, contact Red Hat for support. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help > Open a support case , or view more options from Help > Support options . Google will not provide troubleshooting. The Hybrid Cloud Console integration with Google Chat is fully supported by Red Hat. 1.2.1. Configuring incoming webhooks in Google Chat In Google spaces, create a new webhook to connect with the Hybrid Cloud Console. Prerequisites You have a new or existing Google space in Google Chat. Procedure In your Google space, click the arrow on the space name to open the dropdown menu: Select Apps & Integrations . Click Webhooks . Enter the following information in the Incoming webhooks dialog: Enter a name for the integration (for example, Engineering Google Chat ). Optional: To add an avatar for the notifications, enter a URL to an image. Click Save to generate the webhook URL. Copy the webhook URL to use for configuration in the Hybrid Cloud Console. Additional resources See Send messages to Google Chat with incoming webhooks in the Google Chat documentation for more detailed information about Google Chat configuration. 1.2.2. Configuring the Google Chat integration in the Red Hat Hybrid Cloud Console Create a new integration in the Hybrid Cloud Console using the webhook URL from Google Chat. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. You have a Google Chat incoming webhook. Procedure In the Hybrid Cloud Console, navigate to Settings > Integrations . Select the Communications tab. Click Add integration . Select Google Chat as the integration type, and then click . In the Integration name field, enter a name for your integration (for example, console-gchat ). Paste the incoming webhook URL that you copied from your Google space into the Endpoint URL field, and click . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you would like your integration to react to. 
To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the Google Chat integration in the Integrations > Communications list. Under Last connection attempt , the status is Ready to show the connection can accept notifications from the Hybrid Cloud Console. Verification Create a test notification to confirm you have successfully connected Google Chat to the Hybrid Cloud Console: to your Google Chat integration on the Integrations > Communications page, click the options icon (...) and click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open your Google space and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Google Chat event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 1.2.3. Creating the behavior group for the Google Chat integration A behavior group defines which notifications will be sent to external services such as Google Chat when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. You have configured the Google Chat integration. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events . Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Type a name for the behavior group and click . In the Actions and Recipients step, select Integration: Google Chat from the Actions drop-down list. From the Recipient drop-down list, select the name of the integration you created (for example, console-gchat ) and click . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click . Review your behavior group settings and click Finish . The new behavior group is listed on the Notifications page. Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to Google Chat. In your Google Space, check for notifications from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Google Chat . If the label is green, the notification succeeded. If the label is red, the integration might need to be adjusted. If the integration is not working as expected, verify that the incoming webhook connector was properly created in Google Chat, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 1.2.4. 
Additional resources For information about troubleshooting your Google Chat integration, see Troubleshooting Hybrid Cloud Console integrations . See the Google Chat documentation about incoming webhooks for more detailed information about Google Chat configuration. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . 1.3. Integrating Slack with the Hybrid Cloud Console You can configure the Hybrid Cloud Console to send event notifications to a Slack channel or directly to a user. The Slack integration supports events from all Hybrid Cloud Console services. Note The Slack integration in this example is configured for Red Hat Enterprise Linux. The integration also works with Red Hat OpenShift and Hybrid Cloud Console events. The Slack integration uses incoming webhooks to receive event data. For more information about webhooks, see Sending messages using incoming webhooks in the Slack API documentation. Contacting support If you have any issues with the Hybrid Cloud Console integration with Slack, contact Red Hat for support. Slack will not provide troubleshooting. The Hybrid Cloud Console integration with Slack is fully supported by Red Hat. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help > Open a support case , or view more options from Help > Support options . 1.3.1. Configuring incoming webhooks in Slack To prepare Slack for integration with the Hybrid Cloud Console, you must configure incoming webhooks in Slack. Prerequisites You have owner or admin permissions to the Slack instance where you want to add incoming webhooks. You have App Manager permissions to add Slack apps to a channel. You have a Slack channel or user to receive notifications. Procedure Create a Slack app: Go to the Slack API web page and click the Create your Slack app button. This opens the Create an app dialog. Select From scratch to use the Slack configuration UI to create your app. Enter a name for your app and select the workspace where you want to receive notifications. Note If you see a message that administrator approval is required, you can request approval in the step. Click Create App to finish creating the Slack app. Enable incoming webhooks: Under the Features heading in the navigation panel, click Incoming Webhooks . Toggle the Activate Incoming Webhooks switch to On . Click the Request to Add New Webhook button. If required, enter a message to your administrators to grant access to your app and click Submit Request . A success message confirms you have configured this correctly. Create an incoming webhook: Under Settings in the navigation panel, click Install App . In the Install App section, click the Install to workspace button. Select the channel where you want your Slack app to post notifications, or select a user to send notifications to as direct messages. Click Allow to save changes. Optional: Configure how your Hybrid Cloud Console notifications appear in Slack: Under Settings in the navigation panel, click Basic Information . Scroll down to Display Information . Edit your app name, description, icon, and background color as desired. Click Save Changes . Copy the webhook URL: Under Features , click Incoming Webhooks . Click the Copy button to the webhook URL. You will use the URL to set up the integration in the Hybrid Cloud Console in Section 1.3.2, "Configuring the Slack integration in the Red Hat Hybrid Cloud Console" . 
Verification Open the Slack channel or user you selected during configuration, and check for a message confirming you have added the integration. Additional resources For information about webhooks in Slack, see Sending messages using incoming webhooks . For information about workflows, see Build a workflow: Create a workflow that starts outside of Slack . For information about managing app approvals, see Managing app approvals in Enterprise Grid workspaces . For general help with Slack, see the Slack Help Center . 1.3.2. Configuring the Slack integration in the Red Hat Hybrid Cloud Console After you have configured an incoming webhook in Slack, you can configure the Hybrid Cloud Console to send event notifications to the Slack channel or user you configured. Prerequisites You have Organization Administrator or Notifications administrator permissions for the Red Hat Hybrid Cloud Console. Procedure If necessary, go to the Slack API web page and copy the webhook URL that you configured. Note See Section 1.3.1, "Configuring incoming webhooks in Slack" for the steps to create a Slack webhook URL. In the Hybrid Cloud Console, navigate to Settings > Integrations . Select the Communications tab. Click Add integration . Select Slack as the integration type and click Next . Enter a name for the integration (for example, My Slack notifications ). Paste the Slack webhook URL that you copied from Slack into the Workspace URL field and click Next . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you want your integration to react to and click Next . To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the Slack integration in the Integrations > Communications list. Under Last connection attempt , the status is Ready to show the connection can accept notifications from the Hybrid Cloud Console. Verification Create a test notification to confirm you have successfully connected Slack to the Hybrid Cloud Console: Next to your Slack integration on the Integrations > Communications page, click the options icon (...) and click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open the Slack channel you configured and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Slack event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 1.3.3. Creating the behavior group for the Slack integration A behavior group defines which notifications will be sent to external services such as Slack when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. You have configured the Slack integration. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events .
Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Enter a name for the behavior group and click Next . In the Actions and Recipients step, select Integration: Slack from the Actions drop-down list. From the Recipient drop-down list, select the name of the integration you created (for example, My Slack notifications ) and click Next . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click Next . Review your behavior group settings and click Finish . The new behavior group appears on the Notifications > Configure Events page in the Behavior Groups tab. Note You can create and edit multiple behavior groups to include any additional platforms that the notifications service supports. Select Settings > Integrations and click the Communications tab. When the Slack integration is ready to send events to Slack, the Last connection attempt column shows Ready . If the notification reached Slack successfully, the Last connection attempt column shows Success . Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to Slack. In your Slack channel, check for notifications from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Slack . If the label is green, the notification succeeded. If the label is red, the integration might need to be adjusted. If the integration is not working as expected, verify that the incoming webhook connector was properly created in Slack, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 1.3.4. Additional resources For detailed information about Slack configuration, see Sending messages using incoming webhooks in the Slack documentation. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . For information about troubleshooting your Slack integration, see Troubleshooting Hybrid Cloud Console integrations . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/assembly-integrating-comms_integrations |
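Note on the incoming webhook mechanism used above: both the Google Chat and the Slack integrations work by posting a small JSON payload to the webhook URL you copied. As a minimal sketch only, and outside the documented Hybrid Cloud Console procedure, the following curl commands show how such a webhook can be exercised manually to confirm that the URL itself accepts messages; the URLs are placeholders, not real values.

# Post a test message to a Slack incoming webhook (URL is a placeholder)
curl -X POST -H 'Content-Type: application/json' \
  -d '{"text": "Manual webhook test message"}' \
  'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'

# Post a test message to a Google Chat space webhook (URL is a placeholder)
curl -X POST -H 'Content-Type: application/json; charset=UTF-8' \
  -d '{"text": "Manual webhook test message"}' \
  'https://chat.googleapis.com/v1/spaces/SPACE_ID/messages?key=KEY&token=TOKEN'

If the message appears in the channel or space, the webhook URL is working and any remaining problem is most likely in the Hybrid Cloud Console integration configuration rather than in Slack or Google Chat.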
Schedule and quota APIs | Schedule and quota APIs OpenShift Container Platform 4.17 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/schedule_and_quota_apis/index |
Preface | Preface Once you have deployed a Red Hat Quay registry, there are many ways you can further configure and manage that deployment. Topics covered here include: Advanced Red Hat Quay configuration Setting notifications to alert you of a new Red Hat Quay release Securing connections with SSL/TLS certificates Directing action logs storage to Elasticsearch Configuring image security scanning with Clair Scan pod images with the Container Security Operator Integrate Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator Mirroring images with repository mirroring Sharing Red Hat Quay images with a BitTorrent service Authenticating users with LDAP Enabling Quay for Prometheus and Grafana metrics Setting up geo-replication Troubleshooting Red Hat Quay For a complete list of Red Hat Quay configuration fields, see the Configure Red Hat Quay page. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/pr01 |
Chapter 29. KafkaJmxOptions schema reference | Chapter 29. KafkaJmxOptions schema reference Used in: KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , ZookeeperClusterSpec Full list of KafkaJmxOptions schema properties Configures JMX connection options. Get JMX metrics from Kafka brokers, ZooKeeper nodes, Kafka Connect, and MirrorMaker 2. by connecting to port 9999. Use the jmxOptions property to configure a password-protected or an unprotected JMX port. Using password protection prevents unauthorized pods from accessing the port. You can then obtain metrics about the component. For example, for each Kafka broker you can obtain bytes-per-second usage data from clients, or the request rate of the network of the broker. To enable security for the JMX port, set the type parameter in the authentication field to password . Example password-protected JMX configuration for Kafka brokers and ZooKeeper nodes apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: authentication: type: "password" # ... zookeeper: # ... jmxOptions: authentication: type: "password" #... You can then deploy a pod into a cluster and obtain JMX metrics using the headless service by specifying which broker you want to address. For example, to get JMX metrics from broker 0 you specify: " CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers" CLUSTER-NAME -kafka-0 is name of the broker pod, and CLUSTER-NAME -kafka-brokers is the name of the headless service to return the IPs of the broker pods. If the JMX port is secured, you can get the username and password by referencing them from the JMX Secret in the deployment of your pod. For an unprotected JMX port, use an empty object {} to open the JMX port on the headless service. You deploy a pod and obtain metrics in the same way as for the protected port, but in this case any pod can read from the JMX port. Example open port JMX configuration for Kafka brokers and ZooKeeper nodes apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: {} # ... zookeeper: # ... jmxOptions: {} # ... Additional resources For more information on the Kafka component metrics exposed using JMX, see the Apache Kafka documentation . 29.1. KafkaJmxOptions schema properties Property Property type Description authentication KafkaJmxAuthenticationPassword Authentication configuration for connecting to the JMX port. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: # jmxOptions: authentication: type: \"password\" #",
"\" CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: # jmxOptions: {} #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaJmxOptions-reference |
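As a rough sketch of how the JMX port described in the KafkaJmxOptions reference might be queried from a pod in the cluster, the following command uses the JmxTool class shipped with Apache Kafka. The pod name, headless service name, and metric object name are illustrative assumptions, the class location and options can differ between Kafka versions, and this sketch assumes the open-port configuration (jmxOptions: {}) because JmxTool does not pass the username and password from the JMX Secret.

# Run from a pod that contains the Kafka distribution (names are examples)
/opt/kafka/bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url 'service:jmx:rmi:///jndi/rmi://my-cluster-kafka-0.my-cluster-kafka-brokers:9999/jmxrmi' \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec'

The broker is addressed as <pod-name>.<headless-service-name>, matching the "CLUSTER-NAME-kafka-0.CLUSTER-NAME-kafka-brokers" pattern shown in the reference.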
Chapter 9. Using Streams for Apache Kafka with MirrorMaker 2 | Chapter 9. Using Streams for Apache Kafka with MirrorMaker 2 Use MirrorMaker 2 to replicate data between two or more active Kafka clusters, within or across data centers. To configure MirrorMaker 2, edit the config/connect-mirror-maker.properties configuration file. If required, you can enable distributed tracing for MirrorMaker 2 . Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages . Note MirrorMaker 2 has features not supported by the version of MirrorMaker. However, you can configure MirrorMaker 2 to be used in legacy mode . 9.1. Configuring active/active or active/passive modes You can use MirrorMaker 2 in active/passive or active/active cluster configurations. active/active cluster configuration An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster. active/passive cluster configuration An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination. 9.1.1. Bidirectional replication (active/active) The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 9.1. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 9.1.2. Unidirectional replication (active/passive) The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. 9.2. Configuring MirrorMaker 2 connectors Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors: MirrorSourceConnector The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run. MirrorCheckpointConnector The checkpoint connector periodically tracks offsets. 
If enabled, it also synchronizes consumer group offsets between the source and target cluster. MirrorHeartbeatConnector The heartbeat connector periodically checks connectivity between the source and target cluster. The following table describes connector properties and the connectors you configure to use them. Table 9.1. MirrorMaker 2 connector configuration properties Property sourceConnector checkpointConnector heartbeatConnector admin.timeout.ms Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). [✓] [✓] [✓] replication.policy.class Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy . [✓] [✓] [✓] replication.policy.separator The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property as topics retain their original names. [✓] [✓] [✓] consumer.poll.timeout.ms Timeout when polling the source cluster. Default is 1000 (1 second). [✓] [✓] offset-syncs.topic.location The location of the offset-syncs topic, which can be the source (default) or target cluster. [✓] [✓] topic.filter.class Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter . [✓] [✓] config.property.filter.class Topic filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter . [✓] config.properties.exclude Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. [✓] offset.lag.max Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100 . [✓] offset-syncs.topic.replication.factor Replication factor for the internal offset-syncs topic. Default is 3 . [✓] refresh.topics.enabled Enables check for new topics and partitions. Default is true . [✓] refresh.topics.interval.seconds Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. [✓] replication.factor The replication factor for new topics. Default is 2 . [✓] sync.topic.acls.enabled Enables synchronization of ACLs from the source cluster. Default is true . For more information, see Section 9.5, "ACL rules synchronization" . [✓] sync.topic.acls.interval.seconds Frequency of ACL synchronization. Default is 600 (10 minutes). [✓] sync.topic.configs.enabled Enables synchronization of topic configuration from the source cluster. Default is true . [✓] sync.topic.configs.interval.seconds Frequency of topic configuration synchronization. Default 600 (10 minutes). [✓] checkpoints.topic.replication.factor Replication factor for the internal checkpoints topic. Default is 3 . [✓] emit.checkpoints.enabled Enables synchronization of consumer offsets to the target cluster. Default is true . [✓] emit.checkpoints.interval.seconds Frequency of consumer offset synchronization. Default is 60 (1 minute). [✓] group.filter.class Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter . [✓] refresh.groups.enabled Enables check for new consumer groups. Default is true . 
[✓] refresh.groups.interval.seconds Frequency of consumer group refresh. Default is 600 (10 minutes). [✓] sync.group.offsets.enabled Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false . [✓] sync.group.offsets.interval.seconds Frequency of consumer group offset synchronization. Default is 60 (1 minute). [✓] emit.heartbeats.enabled Enables connectivity checks on the target cluster. Default is true . [✓] emit.heartbeats.interval.seconds Frequency of connectivity checks. Default is 1 (1 second). [✓] heartbeats.topic.replication.factor Replication factor for the internal heartbeats topic. Default is 3 . [✓] 9.2.1. Changing the location of the consumer group offsets topic MirrorMaker 2 tracks offsets for consumer groups using internal topics. offset-syncs topic The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata. checkpoints topic The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster. 9.2.2. Synchronizing consumer group offsets The __consumer_offsets topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster. Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration, and setting the property to true . Synchronization is disabled by default. When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics. Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned. If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. 
You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property, which is performed every 10 minutes by default. Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages. Note If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic. 9.2.3. Deciding when to use the heartbeat connector The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat topic is located on the target cluster, which allows it to do the following: Identify all source clusters it is mirroring data from Verify the liveness and latency of the mirroring process This helps to make sure that the process is not stuck or has stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration. 9.2.4. Aligning the configuration of MirrorMaker 2 connectors To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors: replication.policy.class replication.policy.separator offset-syncs.topic.location topic.filter.class For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings. 9.3. Connector producer and consumer configuration MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings. Important Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change. Producer and consumer configuration applies to all connectors. You specify the configuration in the config/connect-mirror-maker.properties file. Use the properties file to override any default configuration for the producers and consumers in the following format: <source_cluster_name> .consumer. <property> <source_cluster_name> .producer. <property> <target_cluster_name> .consumer. <property> <target_cluster_name> .producer. <property> The following example shows how you configure the producers and consumers. Though the properties are set for all connectors, some configuration properties are only relevant to certain connectors. Example configuration for connector producers and consumers clusters=cluster-1,cluster-2 # ... cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000 9.4. 
Specifying a maximum number of tasks Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups. Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks. You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasks.max property. Without specifying a maximum number of tasks, the default setting is a single task. The heartbeat connector always uses a single task. The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasks.max . For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups. tasks.max configuration for MirrorMaker connectors clusters=cluster-1,cluster-2 # ... tasks.max = 10 By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when adjusting lower. More frequent checks can have a negative impact on performance. 9.5. ACL rules synchronization If AclAuthorizer is being used, ACL rules that manage access to brokers also apply to remote topics. Users that can read a source topic can read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 9.6. Running MirrorMaker 2 in dedicated mode Use MirrorMaker 2 to synchronize data between Kafka clusters through configuration. This procedure shows how to configure and run a dedicated single-node MirrorMaker 2 cluster. Dedicated clusters use Kafka Connect worker nodes to mirror data between Kafka clusters. Note It is also possible to run MirrorMaker 2 in distributed mode. MirrorMaker 2 operates as connectors in both dedicated and distributed modes. When running a dedicated MirrorMaker cluster, connectors are configured in the Kafka Connect cluster. As a consequence, this allows direct access to the Kafka Connect cluster, the running of additional connectors, and use of the REST API. For more information, refer to the Apache Kafka documentation . 
The configuration must specify: Each Kafka cluster Connection information for each cluster, including TLS authentication The replication flow and direction Cluster to cluster Topic to topic Replication rules Committed offset tracking intervals This procedure describes how to implement MirrorMaker 2 by creating the configuration in a properties file, then passing the properties when using the MirrorMaker script file to set up the connections. You can specify the topics and consumer groups you wish to replicate from a source cluster. You specify the names of the source and target clusters, then specify the topics and consumer groups to replicate. In the following example, topics and consumer groups are specified for replication from cluster 1 to 2. Example configuration to replicate specific topics and consumer groups clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2 You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set these properties. You can also replicate all topics and consumer groups by using .* as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster. Before you begin A sample configuration properties file is provided in ./config/connect-mirror-maker.properties . Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Open the sample properties file in a text editor, or create a new one, and edit the file to include connection information and the replication flows for each Kafka cluster. The following example shows a configuration to connect two clusters, cluster-1 and cluster-2 , bidirectionally. Cluster names are configurable through the clusters property. Example MirrorMaker 2 configuration clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15 1 Each Kafka cluster is identified with its alias. 2 Connection information for cluster-1 , using the bootstrap address and port 443 . Both clusters use port 443 to connect to Kafka using OpenShift Routes . 3 The ssl. properties define TLS configuration for cluster-1 . 4 Connection information for cluster-2 . 5 The ssl. properties define the TLS configuration for cluster-2 . 6 Replication flow enabled from cluster-1 to cluster-2 . 
7 Replication flow enabled from cluster-2 to cluster-1 . 8 Replication of all topics from cluster-1 to cluster-2 . The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. 9 Replication of specific topics from cluster-2 to cluster-1 . 10 Replication of all consumer groups from cluster-1 to cluster-2 . The checkpoint connector replicates the specified consumer groups. 11 Replication of specific consumer groups from cluster-2 to cluster-1 . 12 Defines the separator used for the renaming of remote topics. 13 When enabled, ACLs are applied to synchronized topics. The default is false . 14 The period between checks for new topics to synchronize. 15 The period between checks for new consumer groups to synchronize. OPTION: If required, add a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is used for active/passive backups and data migration. replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy OPTION: If you want to synchronize consumer group offsets, add configuration to enable and manage the synchronization: refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3 1 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default. 2 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization. 3 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks. Start Kafka in the target clusters: /opt/kafka/bin/kafka-server-start.sh -daemon \ /opt/kafka/config/kraft/server.properties Start MirrorMaker with the cluster connection configuration and replication policies you defined in your properties file: /opt/kafka/bin/connect-mirror-maker.sh \ /opt/kafka/config/connect-mirror-maker.properties MirrorMaker sets up connections between the clusters. For each target cluster, verify that the topics are being replicated: /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list 9.7. (Deprecated) Using MirrorMaker 2 in legacy mode This procedure describes how to configure MirrorMaker 2 to use it in legacy mode. Legacy mode supports the version of MirrorMaker. The MirrorMaker script /opt/kafka/bin/kafka-mirror-maker.sh can run MirrorMaker 2 in legacy mode. Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. Kafka MirrorMaker 1 will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use MirrorMaker 2 with the IdentityReplicationPolicy . Prerequisites You need the properties files you currently use with the legacy version of MirrorMaker. /opt/kafka/config/consumer.properties /opt/kafka/config/producer.properties Procedure Edit the MirrorMaker consumer.properties and producer.properties files to turn off MirrorMaker 2 features. 
For example: replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false 1 Emulate the previous version of MirrorMaker. 2 MirrorMaker 2 features disabled, including the internal checkpoint and heartbeat topics. Save the changes and restart MirrorMaker with the properties files you used with the previous version of MirrorMaker: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh \ --consumer.config /opt/kafka/config/consumer.properties \ --producer.config /opt/kafka/config/producer.properties \ --num.streams=2 The consumer properties provide the configuration for the source cluster and the producer properties provide the target cluster configuration. MirrorMaker sets up connections between the clusters. Start Kafka in the target cluster: su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties For the target cluster, verify that the topics are being replicated: /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list | [
"clusters=cluster-1,cluster-2 cluster-1.consumer.fetch.max.bytes=52428800 cluster-2.producer.batch.size=327680 cluster-2.producer.linger.ms=100 cluster-2.producer.request.timeout.ms=30000",
"clusters=cluster-1,cluster-2 tasks.max = 10",
"clusters=cluster-1,cluster-2 cluster-1->cluster-2.topics = topic-1, topic-2 cluster-1->cluster-2.groups = group-1, group-2",
"clusters=cluster-1,cluster-2 1 cluster-1.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_one>:443 2 cluster-1.security.protocol=SSL 3 cluster-1.ssl.truststore.password=<truststore_name> cluster-1.ssl.truststore.location=<path_to_truststore>/truststore.cluster-1.jks_ cluster-1.ssl.keystore.password=<keystore_name> cluster-1.ssl.keystore.location=<path_to_keystore>/user.cluster-1.p12 cluster-2.bootstrap.servers=<cluster_name>-kafka-bootstrap-<project_name_two>:443 4 cluster-2.security.protocol=SSL 5 cluster-2.ssl.truststore.password=<truststore_name> cluster-2.ssl.truststore.location=<path_to_truststore>/truststore.cluster-2.jks_ cluster-2.ssl.keystore.password=<keystore_name> cluster-2.ssl.keystore.location=<path_to_keystore>/user.cluster-2.p12 cluster-1->cluster-2.enabled=true 6 cluster-2->cluster-1.enabled=true 7 cluster-1->cluster-2.topics=.* 8 cluster-2->cluster-1.topics=topic-1, topic-2 9 cluster-1->cluster-2.groups=.* 10 cluster-2->cluster-1.groups=group-1, group-2 11 replication.policy.separator=- 12 sync.topic.acls.enabled=false 13 refresh.topics.interval.seconds=60 14 refresh.groups.interval.seconds=60 15",
"replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy",
"refresh.groups.interval.seconds=60 sync.group.offsets.enabled=true 1 sync.group.offsets.interval.seconds=60 2 emit.checkpoints.interval.seconds=60 3",
"/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties",
"/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list",
"replication.policy.class=org.apache.kafka.mirror.LegacyReplicationPolicy 1 refresh.topics.enabled=false 2 refresh.groups.enabled=false emit.checkpoints.enabled=false emit.heartbeats.enabled=false sync.topic.configs.enabled=false sync.topic.acls.enabled=false",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2",
"su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-mirrormaker-str |
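The dedicated-mode MirrorMaker 2 example in this chapter focuses on a TLS-secured, bidirectional flow. As a minimal sketch only, assuming two clusters reachable over plain listeners and a one-way active/passive flow, a stripped-down configuration could look like the following; the cluster aliases, bootstrap addresses, and the choice of IdentityReplicationPolicy are illustrative assumptions, and every property used here is described earlier in the chapter.

# Write a minimal unidirectional (active/passive) MirrorMaker 2 configuration (values are examples)
cat > /opt/kafka/config/connect-mirror-maker.properties <<'EOF'
clusters=source,target
source.bootstrap.servers=source-broker:9092
target.bootstrap.servers=target-broker:9092
source->target.enabled=true
target->source.enabled=false
source->target.topics=.*
source->target.groups=.*
# Keep original topic names on the passive cluster (active/passive backup)
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy
# Copy consumer group offsets so consumers can fail over to the passive cluster
sync.group.offsets.enabled=true
EOF

# Start MirrorMaker 2 in dedicated mode with the configuration
/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties

Setting replication.policy.class at the top level keeps the source, checkpoint, and heartbeat connectors aligned, as required by the chapter's guidance on matching connector configuration.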
probe::stap.cache_add_nss | probe::stap.cache_add_nss Name probe::stap.cache_add_nss - Add NSS (Network Security Services) information to cache Synopsis stap.cache_add_nss Values source_path the path the .sgn file is coming from (incl filename) dest_path the path the .sgn file is going to (incl filename) Description Fires just before the file is actually moved. Note: stap must be compiled with NSS support; if moving the kernel module fails, this probe will not fire. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-cache-add-nss |
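As a brief, untested sketch of how this probe point might be used, the following one-line SystemTap script prints the source and destination paths each time the probe fires; it assumes a stap build with NSS support, as the description above notes.

# Print .sgn cache moves as they happen (requires an NSS-enabled stap)
stap -e 'probe stap.cache_add_nss { printf("signing file: %s -> %s\n", source_path, dest_path) }'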
Chapter 4. Creating a new Apache Camel JUnit test case | Chapter 4. Creating a new Apache Camel JUnit test case Overview A common way of testing routes is to use JUnit. The design time tooling includes a wizard that simplifies creating a JUnit test case for your routes. The wizard uses the endpoints you specify to generate the starting point code and configuration for the test. Note After you create the boilerplate JUnit test case, you need to modify it to add expectations and assertions specific to the route that you've created or modified, so the test is valid for the route. Prerequisites Before you create a new JUnit test case, you need to perform a preliminary task: If you are replacing an existing JUnit test case, you need to delete it before you create a new one. See the section called "Deleting and existing JUnit test case" . If you are creating a new JUnit test case in a project that hasn't one, you need to first create the project_root /src/test/java folder for the test case that is included in the build path. See the section called "Creating and adding the src/test/java folder to the build path" . Deleting and existing JUnit test case In the Project Explorer view, expand the project's root node to expose the <root_project> /src/test/java folder. Locate the JUnit test case file in the /src/test/java folder. Depending on which DSL the project is based on, the JUnit test case file is named BlueprintXmlTest.java or CamelContextXmlTest.java . Right-click the JUnit test case .java file to open the context menu, and then select Delete . The JUnit test case .java file disappears from the Project Explorer view. You can now create a new JUnit test case . Creating and adding the src/test/java folder to the build path In the Project Explorer view, right-click the project's root to open the context menu. Select New Folder to open the Create a new folder resource wizard. In the wizard's project tree pane, expand the project's root node and select the src folder. Make sure <project_root> /src appears in the Enter or select the parent folder field. In Folder name , enter /test/java . This folder will store the new JUnit test case you create. Click Finish . In the Project Explorer view, the new src/test/java folder appears under the src/main/resources folder. You can verify that this folder is on the class path by opening its context menu and selecting Build Path . If Remove from Build Path is a menu option, you know the src/test/java folder is on the class path. You can now create a new JUnit test case . Creating a JUnit test case To create a new JUnit test case for your route: In the Project Explorer view, select the routing context .xml file in your project. Right-click it to open the context menu, and then select New Camel Test Case to open the New Camel JUnit Test Case wizard, as shown in Figure 4.1, "New Camel JUnit Test Case wizard" . Figure 4.1. New Camel JUnit Test Case wizard Alternatively, you can open the wizard by selecting File New Other > Fuse > Camel Test Case from the menu bar. In Source folder , accept the default location of the source code for the test case, or enter another location. You can click to search for a location. In Package , accept the default package name for the generated test code, or enter another package name. You can click to search for a package. In Camel XML file under test , accept the default pathname of the routing context file that contains the route you want to test, or enter another pathname. You can click to search for a context file. 
In Name , accept the default name for the generated test class, or enter another name. Select the method stubs you want to include in the generated code. If you want to include the default generated comments in the generated code, check the Generate comments box. Click Next to open the Test Endpoints page. For example, Figure 4.2, "New Camel JUnit Test Case page" shows a route's input and output file endpoints selected. Figure 4.2. New Camel JUnit Test Case page Under Available endpoints , select the endpoints you want to test. Click the checkbox next to any selected endpoint to deselect it. Click Finish . Note If prompted, add JUnit to the build path. The artifacts for the test are added to your project and appear in the Project Explorer view under src/test/java . The class implementing the test case opens in the Java editor. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/newcameltestcase |
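After the wizard generates the test class, you typically run it like any other JUnit test in the project. As a small assumed example, using the default class name CamelContextXmlTest mentioned earlier and a standard Maven Surefire setup, the test can be run from the command line as follows; the class name and build setup are assumptions about your project rather than something the wizard guarantees.

# Run only the generated Camel JUnit test case with Maven (class name is the wizard default)
mvn test -Dtest=CamelContextXmlTest

# Or run the project's whole test suite, including the generated test
mvn test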
Windows Container Support for OpenShift | Windows Container Support for OpenShift OpenShift Container Platform 4.16 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f wmco-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator",
"oc create -f <file-name>.yaml",
"oc create -f wmco-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4",
"oc create -f <file-name>.yaml",
"oc create -f wmco-sub.yaml",
"oc get csv -n openshift-windows-machine-config-operator",
"NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded",
"oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1",
"skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"skopeo copy docker://mcr.microsoft.com/oss/kubernetes/pause:3.9 docker://example.io/oss/kubernetes/pause:3.9",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example2/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com source: registry.redhat.io mirrorSourcePolicy: NeverContactSource - mirrors: - docker.io source: docker-mirror.internal mirrorSourcePolicy: AllowContactingSource",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.29.4 ip-10-0-138-148.ec2.internal Ready master 11m v1.29.4 ip-10-0-139-122.ec2.internal Ready master 11m v1.29.4 ip-10-0-147-35.ec2.internal Ready worker 7m v1.29.4 ip-10-0-153-12.ec2.internal Ready worker 7m v1.29.4 ip-10-0-154-10.ec2.internal Ready master 11m v1.29.4",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"tree USDconfig_path",
"C:/k/containerd/registries/ |── registry.access.redhat.com | └── hosts.toml |── mirror.example.com | └── hosts.toml └── docker.io └── hosts.toml:",
"cat \"USDconfig_path\"/registry.access.redhat.com/host.toml server = \"https://registry.access.redhat.com\" # default fallback server since \"AllowContactingSource\" mirrorSourcePolicy is set [host.\"https://example.io/example/ubi-minimal\"] capabilities = [\"pull\"] secondary mirror capabilities = [\"pull\"] cat \"USDconfig_path\"/registry.redhat.io/host.toml \"server\" omitted since \"NeverContactSource\" mirrorSourcePolicy is set [host.\"https://mirror.example.com\"] capabilities = [\"pull\"] cat \"USDconfig_path\"/docker.io/host.toml server = \"https://docker.io\" [host.\"https://docker-mirror.internal\"] capabilities = [\"pull\", \"resolve\"] # resolve tags",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"C:\\> powershell",
"C:\\> Restart-Computer -Force",
"C:\\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip>",
"C:\\> ipconfig | findstr /C:\"Default Gateway\"",
"oc adm uncordon <node1>",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8",
"aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2022*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table",
"aws ec2 describe-images --region <aws_region_name> --filters \"Name=name,Values=Windows_Server-2019*English*Core*Base*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: \"<zone>\" 16",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone_suffix> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <windows_server_image> 9 sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n1-standard-4 networkInterfaces: - network: <infrastructure_id>-network 10 subnetwork: <infrastructure_id>-worker-subnet projectID: <project_id> 11 region: <region> 12 serviceAccounts: - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: windows-user-data 13 zone: <zone> 14",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 9 categories: null cluster: 10 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 11 image: 12 name: <image_id> type: name kind: NutanixMachineProviderConfig 13 memorySize: 16Gi 14 project: type: \"\" subnets: 15 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 16 userDataSecret: name: windows-user-data 17 vcpuSockets: 4 18 vcpusPerSocket: 1 19",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"exclude-nics=",
"C:\\> ipconfig",
"PS C:\\> Get-Service -Name VMTools | Select Status, StartType",
"PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow",
"C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"windows\" - effect: NoSchedule key: os operator: Equal value: \"Windows\"",
"oc create -f <file-name>.yaml",
"oc create -f runtime-class.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1",
"apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer",
"apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" os: name: \"windows\" runtimeClassName: windows2019 3",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api",
"oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines.machine.openshift.io",
"kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core",
"kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core",
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api",
"oc delete --all pods --namespace=openshift-windows-machine-config-operator",
"oc get pods --namespace openshift-windows-machine-config-operator",
"oc delete namespace openshift-windows-machine-config-operator"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/windows_container_support_for_openshift/index |
Chapter 4. Red Hat build of OpenJDK features | Chapter 4. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 17 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 17 releases. Note For all the other changes and security fixes, see OpenJDK 17.0.12 Released . Red Hat build of OpenJDK enhancements Red Hat build of OpenJDK 17 provides enhancements to features originally created in releases of Red Hat build of OpenJDK. Fallback option for POST -only OCSP requests JDK-8175903 , which was introduced in Red Hat build of OpenJDK 17, added support for using the HTTP GET method for Online Certificate Status Protocol (OCSP) requests. This feature was enabled unconditionally for small requests. The Internet Engineering Task Force (IETF) RFC 5019 and RFC 6960 explicitly allow and recommend the use of HTTP GET requests. However, some OCSP responders do not work well with these types of requests. Red Hat build of OpenJDK 17.0.12 introduces a JDK system property, com.sun.security.ocsp.useget . By default, this property is set to true , which retains the current behavior of using GET requests for small requests. If this property is set to false , only HTTP POST requests are used, regardless of size. Note This fallback option for POST -only OCSP requests is a non-standard feature, which might be removed in a future release if the use of HTTP GET requests with OCSP responders no longer causes any issues. See JDK-8328638 (JDK Bug System) . DTLS 1.0 is disabled by default OpenJDK 9 introduced support for both version 1.0 and version 1.2 of the Datagram Transport Layer Security (DTLS) protocol ( JEP-219 ). DTLSv1.0, which is based on TLS 1.1, is no longer recommended for use, because this protocol is considered weak and insecure by modern standards. In Red Hat build of OpenJDK 17.0.12, if you attempt to use DTLSv1.0, the JDK throws an SSLHandshakeException by default. If you want to continue using DTLSv1.0, you can remove DTLSv1.0 from the jdk.tls.disabledAlgorithms system property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of DTLSv1.0 is not recommended and is at the user's own risk. See JDK-8256660 (JDK Bug System) . RPATH preferred over RUNPATH for USDORIGIN runtime search paths in internal JDK binaries Native executables and libraries in the JDK use embedded runtime search paths (rpaths) to locate required internal JDK native libraries. On Linux systems, binaries can specify these search paths by using either DT_RPATH or DT_RUNPATH . If a binary specifies search paths by using DT_RPATH , these paths are searched before any paths that are specified in the LD_LIBRARY_PATH environment variable. If a binary specifies search paths by using DT_RUNPATH , these paths are searched only after paths that are specified in LD_LIBRARY_PATH . This means that the use of DT_RUNPATH can allow JDK internal libraries to be overridden by any libraries of the same name that are specified in LD_LIBRARY_PATH , which is undesirable from a security perspective. In earlier releases, the type of runtime search path used was based on the default search path for the dynamic linker. In Red Hat build of OpenJDK 17.0.12, to ensure that DT_RPATH is used, the --disable-new-dtags option is explicitly passed to the linker. See JDK-8326891 (JDK Bug System) . 
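As a minimal command-line sketch of the OCSP and DTLS switches described above, assuming a placeholder application class MyApp and a placeholder override file /path/to/custom.security (neither name comes from this documentation):

java -Dcom.sun.security.ocsp.useget=false MyApp
java -Djava.security.properties=/path/to/custom.security MyApp

The first invocation forces POST-only OCSP requests. The second points the JDK at an override file whose jdk.tls.disabledAlgorithms line omits the DTLSv1.0 entry, which re-enables DTLSv1.0 at the user's own risk.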
TrimNativeHeapInterval option available as a product switch Red Hat build of OpenJDK 17.0.12 provides the -XX:TrimNativeHeapInterval=ms option as an official product switch. This enhancement enables the JVM to trim the native heap at specified intervals (in milliseconds) on supported platforms. Currently, the only supported platform for this enhancement is Linux with glibc . You can disable trimming by setting TrimNativeHeapInterval=0 . The trimming feature is disabled by default. See JDK-8325496 (JDK Bug System) . -XshowSettings launcher option includes a security category In Red Hat build of OpenJDK 17.0.12, the -XshowSettings launcher option includes a security category, which allows the following arguments to be passed: Argument Details -XshowSettings:security or -XshowSettings:security:all Show all security settings and continue. -XshowSettings:security:properties Show security properties and continue. -XshowSettings:security:providers Show static security provider settings and continue. -XshowSettings:security:tls Show TLS-related security settings and continue. If third-party security providers are included in the application class path or module path, and configured in the java.security file, the output includes these third-party security providers. See JDK-8281658 (JDK Bug System) . GlobalSign R46 and E46 root certificates added In Red Hat build of OpenJDK 17.0.12, the cacerts truststore includes two GlobalSign TLS root certificates: Certificate 1 Name: GlobalSign Alias name: globalsignr46 Distinguished name: CN=GlobalSign Root R46, O=GlobalSign nv-sa, C=BE Certificate 2 Name: GlobalSign Alias name: globalsigne46 Distinguished name: CN=GlobalSign Root E46, O=GlobalSign nv-sa, C=BE See JDK-8316138 (JDK Bug System) . Fix for long garbage collection pauses due to imbalanced iteration during the Code Root Scan phase The Code Root Scan phase of garbage collection finds references to Java objects within compiled code. To speed up this process, a cache is maintained within each region of the compiled code that contains references into the Java heap. On the assumption that the set of references was small, releases used a single thread per region to iterate through these references. This single-threaded approach introduced a scalability bottleneck, where performance could be reduced if a specific region contained a large number of references. In Red Hat build of OpenJDK 17.0.12, multiple threads are used, which helps to remove any scalability bottleneck. See JDK-8315503 (JDK Bug System) . Change in behavior for AWT headless mode detection on Windows In earlier releases, unless the java.awt.headless system property was set to true , a call to java.awt.GraphicsEnvironment.isHeadless() returned false on Windows Server platforms. From Red Hat build of OpenJDK 17.0.12 onward, unless the java.awt.headless property is explicitly set to false and if no valid monitor is detected on the current system at runtime, a call to java.awt.GraphicsEnvironment.isHeadless() returns true on Windows Server platforms. A valid monitor might not be detected, for example, if a session was initiated by a service or by PowerShell remoting. This change in behavior means that applications running under these conditions, which previously expected to run in a headful context, might now encounter unexpected HeadlessException errors being thrown by Abstract Window Toolkit (AWT) operations. You can reinstate the old behavior by setting the java.awt.headless property to false . 
However, if applications are running in headful mode and a valid display is not available, these applications are likely to continue experiencing unexpected issues. See JDK-8185862 (JDK Bug System) . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.12/rn_openjdk-17012-features_openjdk |
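The runtime options described in the OpenJDK chapter above can also be exercised directly from the command line; the interval value and the JAR names below are illustrative placeholders rather than values taken from this documentation:

java -XX:TrimNativeHeapInterval=5000 -jar app.jar
java -XshowSettings:security:tls -version
keytool -list -cacerts -alias globalsignr46
java -Djava.awt.headless=false -jar desktop-app.jar

The first command asks the JVM to trim the glibc native heap every 5000 milliseconds, the second prints only the TLS-related security settings before continuing, the third checks that the GlobalSign Root R46 certificate is present in the default cacerts truststore, and the fourth reinstates the old headful detection behavior on Windows Server.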
Chapter 6. Changing the power button behavior | Chapter 6. Changing the power button behavior When you press the power button on your computer, it suspends or shuts down the system by default. You can customize this behavior according to your preferences. 6.1. Changing the behavior of the power button when pressing the button and GNOME is not running When you press the power button in a non-graphical systemd target, it shuts down the system by default. You can customize this behavior according to your preferences. Prerequisites Administrative access. Procedure Edit the /etc/systemd/logind.conf configuration file and set the HandlePowerKey variable to one of the following options: poweroff Shut down the computer. reboot Reboot the system. halt Initiate a system halt. kexec Initiate a kexec reboot. suspend Suspend the system. hibernate Initiate system hibernation. ignore Do nothing. For example, to reboot the system upon pressing the power button, use this setting: 6.2. Changing the behavior of the power button when pressing the button and GNOME is running On the graphical login screen or in the graphical user session, pressing the power button suspends the machine by default. This happens both when the user presses the physical power button and when a virtual power button is pressed from a remote console. You can select a different power button behavior. Procedure Create a local database for system-wide settings in the /etc/dconf/db/local.d/01-power file with the following content: Replace <value> with one of the following power button actions: nothing Does nothing. suspend Suspends the system. hibernate Hibernates the system. interactive Shows a pop-up query asking the user what to do. With interactive mode, the system powers off automatically after 60 seconds when pressing the power button. However, you can choose a different behavior from the pop-up query. Optional: Override the user's setting, and prevent the user from changing it. Enter the following configuration in the /etc/dconf/db/local.d/locks/01-power file: Update the system databases: Log out and back in again for the system-wide settings to take effect. | [
"HandlePowerKey=reboot",
"[org/gnome/settings-daemon/plugins/power] power-button-action=<value>",
"/org/gnome/settings-daemon/plugins/power/power-button-action",
"dconf update"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/administering_the_system_using_the_gnome_desktop_environment/changing-the-power-button-behavior_administering-the-system-using-the-gnome-desktop-environment |
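As one concrete instance of the GNOME procedure above, with the nothing action chosen purely as an example, the two files and the follow-up step would look like this. In /etc/dconf/db/local.d/01-power:

[org/gnome/settings-daemon/plugins/power]
power-button-action='nothing'

In /etc/dconf/db/local.d/locks/01-power:

/org/gnome/settings-daemon/plugins/power/power-button-action

Then run dconf update and log out and back in for the system-wide setting to take effect.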
Chapter 5. Managing Red Hat Gluster Storage Servers and Volumes using Red Hat Virtualization Manager | Chapter 5. Managing Red Hat Gluster Storage Servers and Volumes using Red Hat Virtualization Manager You can create and configure Red Hat Gluster Storage volumes using Red Hat Virtualization Manager 3.3 or later by creating a separate cluster with the Enable Gluster Service option enabled. Note Red Hat Gluster Storage nodes must be managed in a separate cluster to Red Hat Virtualization hosts. If you want to configure combined management of virtualization hosts and storage servers, see the Red Hat Hyperconverged Infrastructure documentation: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.0/html/deploying_red_hat_hyperconverged_infrastructure/ A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. Most of the management operations for Red Hat Gluster Storage happen on these volumes. You can use Red Hat Virtualization Manager to create and start new volumes featuring a single global namespace. Note With the exception of the volume operations described in this section, all other Red Hat Gluster Storage functionalities must be executed from the command line. 5.1. Creating a Data Center Select the Data Centers resource tab to list all data centers in the results list. Click the New button to open the New Data Center window. Figure 5.1. New Data Center Window Enter the Name and Description of the data center. Set Type to Shared from the drop-down menu. Set Quota Mode as Disabled . Click OK . The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage are configured. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-Managing_Red_Hat_Storage_Servers_and_Volumes_using_Red_Hat_Enterprise_Virtualization_Manager |
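For the management operations that the note above says must be executed from the command line, a replicated volume would typically be created and started with the gluster CLI directly; the volume name, host names, and brick paths below are placeholders, not values taken from this documentation:

gluster volume create myvol replica 3 server1:/rhgs/brick1/myvol server2:/rhgs/brick1/myvol server3:/rhgs/brick1/myvol
gluster volume start myvol
gluster volume info myvol

The replica 3 form shown here keeps one copy of the data on each of the three bricks.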
Chapter 1. Release notes | Chapter 1. Release notes 1.1. Logging 5.9 Note Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Service on AWS. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.1.1. Logging 5.9.3 This release includes OpenShift Logging Bug Fix Release 5.9.3 1.1.1.1. Bug Fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5614 ) Before this update, monitoring the Vector collector output buffer state was not possible. With this update, monitoring and alerting the Vector collector output buffer size is possible that improves observability capabilities and helps keep the system running optimally. ( LOG-5586 ) 1.1.1.2. CVEs CVE-2024-2961 CVE-2024-28182 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.1.2. Logging 5.9.2 This release includes OpenShift Logging Bug Fix Release 5.9.2 1.1.2.1. Bug Fixes Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. ( LOG-4910 ) Before this update, the rotated infrastructure log files were sent to the application index in some scenarios due to an incorrect configuration in the Vector log collector. With this update, the Vector log collector configuration avoids collecting any rotated infrastructure log files. ( LOG-5156 ) Before this update, the Logging Operator did not monitor changes to the grafana-dashboard-cluster-logging config map. With this update, the Logging Operator monitors changes in the ConfigMap objects, ensuring the system stays synchronized and responds effectively to config map modifications. ( LOG-5308 ) Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5426 ) Before this change, the Fluentd out_http plugin ignored the no_proxy environment variable. With this update, the Fluentd patches the HTTP#start method of ruby to honor the no_proxy environment variable. ( LOG-5466 ) 1.1.2.2. CVEs CVE-2022-48554 CVE-2023-2975 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-6129 CVE-2023-6237 CVE-2023-7008 CVE-2023-45288 CVE-2024-0727 CVE-2024-22365 CVE-2024-25062 CVE-2024-28834 CVE-2024-28835 1.1.3. Logging 5.9.1 This release includes OpenShift Logging Bug Fix Release 5.9.1 1.1.3.1. Enhancements Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. 
( LOG-5401 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5395 ) 1.1.3.2. Bug Fixes Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. ( LOG-5268 ) Before this update, a prune filter without a defined pruneFilterSpec would cause a segfault. With this update, there is a validation error if a prune filter does not have a defined pruneFilterSpec . ( LOG-5322 ) Before this update, a drop filter without a defined dropTestsSpec would cause a segfault. With this update, there is a validation error if a drop filter does not have a defined dropTestsSpec . ( LOG-5323 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5397 ) Before this update, poorly formatted timestamp fields in audit log records led to WARN messages in Red Hat OpenShift Logging Operator logs. With this update, a remap transformation ensures that the timestamp field is properly formatted. ( LOG-4672 ) Before this update, the error message thrown while validating a ClusterLogForwarder resource name and namespace did not correspond to the correct error. With this update, the system checks if a ClusterLogForwarder resource with the same name exists in the same namespace. If not, it corresponds to the correct error. ( LOG-5062 ) Before this update, the validation feature for output config required a TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error messages are more informative. ( LOG-5307 ) Before this update, defining an infrastructure input type did not exclude logging workloads from the collection. With this update, the collection excludes logging services to avoid feedback loops. ( LOG-5309 ) 1.1.3.3. CVEs No CVEs. 1.1.4. Logging 5.9.0 This release includes OpenShift Logging Bug Fix Release 5.9.0 1.1.4.1. Removal notice The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. Instances of OpenShift Elasticsearch Operator from prior logging releases remain supported until the EOL of the logging release. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 1.1.4.2. Deprecation notice In Logging 5.9, Fluentd and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of Red Hat OpenShift Service on AWS. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage.
We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. In Logging 5.9, the Fields option for the Splunk output type was never implemented and is now deprecated. It will be removed in a future release. 1.1.4.3. Enhancements 1.1.4.3.1. Log Collection This enhancement adds the ability to refine the process of log collection by using a workload's metadata to drop or prune logs based on their content. Additionally, it allows the collection of infrastructure logs, such as journal or container logs, and audit logs, such as kube api or ovn logs, to only collect individual sources. ( LOG-2155 ) This enhancement introduces a new type of remote log receiver, the syslog receiver. You can configure it to expose a port over a network, allowing external systems to send syslog logs using compatible tools such as rsyslog. ( LOG-3527 ) With this update, the ClusterLogForwarder API now supports log forwarding to Azure Monitor Logs, giving users better monitoring abilities. This feature helps users to maintain optimal system performance and streamline the log analysis processes in Azure Monitor, which speeds up issue resolution and improves operational efficiency. ( LOG-4605 ) This enhancement improves collector resource utilization by deploying collectors as a deployment with two replicas. This occurs when the only input source defined in the ClusterLogForwarder custom resource (CR) is a receiver input instead of using a daemon set on all nodes. Additionally, collectors deployed in this manner do not mount the host file system. To use this enhancement, you need to annotate the ClusterLogForwarder CR with the logging.openshift.io/dev-preview-enable-collector-as-deployment annotation. ( LOG-4779 ) This enhancement introduces the capability for custom tenant configuration across all supported outputs, facilitating the organization of log records in a logical manner. However, it does not permit custom tenant configuration for logging managed storage. ( LOG-4843 ) With this update, the ClusterLogForwarder CR that specifies an application input with one or more infrastructure namespaces like default , openshift* , or kube* , now requires a service account with the collect-infrastructure-logs role. ( LOG-4943 ) This enhancement introduces the capability for tuning some output settings, such as compression, retry duration, and maximum payloads, to match the characteristics of the receiver. Additionally, this feature includes a delivery mode to allow administrators to choose between throughput and log durability. For example, the AtLeastOnce option configures minimal disk buffering of collected logs so that the collector can deliver those logs after a restart. ( LOG-5026 ) This enhancement adds three new Prometheus alerts, warning users about the deprecation of Elasticsearch, Fluentd, and Kibana. ( LOG-5055 ) 1.1.4.3.2. Log Storage This enhancement in LokiStack improves support for OTEL by using the new V13 object storage format and enabling automatic stream sharding by default. This also prepares the collector for future enhancements and configurations. ( LOG-4538 ) This enhancement introduces support for short-lived token workload identity federation with Azure and AWS log stores for STS enabled Red Hat OpenShift Service on AWS 4.14 and later clusters. Local storage requires the addition of a CredentialMode: static annotation under spec.storage.secret in the LokiStack CR. 
( LOG-4540 ) With this update, the validation of the Azure storage secret is now extended to give early warning for certain error conditions. ( LOG-4571 ) With this update, Loki now adds upstream and downstream support for GCP workload identity federation mechanism. This allows authenticated and authorized access to the corresponding object storage services. ( LOG-4754 ) 1.1.4.4. Bug Fixes Before this update, the logging must-gather could not collect any logs on a FIPS-enabled cluster. With this update, a new oc client is available in cluster-logging-rhel9-operator , and must-gather works properly on FIPS clusters. ( LOG-4403 ) Before this update, the LokiStack ruler pods could not format the IPv6 pod IP in HTTP URLs used for cross-pod communication. This issue caused querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the problem. Now, querying rules and alerts through the Prometheus-compatible API works just like in IPv4 environments. ( LOG-4709 ) Before this fix, the YAML content from the logging must-gather was exported in a single line, making it unreadable. With this update, the YAML white spaces are preserved, ensuring that the file is properly formatted. ( LOG-4792 ) Before this update, when the ClusterLogForwarder CR was enabled, the Red Hat OpenShift Logging Operator could run into a nil pointer exception when ClusterLogging.Spec.Collection was nil. With this update, the issue is now resolved in the Red Hat OpenShift Logging Operator. ( LOG-5006 ) Before this update, in specific corner cases, replacing the ClusterLogForwarder CR status field caused the resourceVersion to constantly update due to changing timestamps in Status conditions. This condition led to an infinite reconciliation loop. With this update, all status conditions synchronize, so that timestamps remain unchanged if conditions stay the same. ( LOG-5007 ) Before this update, there was an internal buffering behavior to drop_newest to address high memory consumption by the collector resulting in significant log loss. With this update, the behavior reverts to using the collector defaults. ( LOG-5123 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5165 ) Before this update, the configuration of the Loki Operator ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5212 ) 1.1.4.5. Known Issues None. 1.1.4.6. CVEs CVE-2023-5363 CVE-2023-5981 CVE-2023-46218 CVE-2024-0553 CVE-2023-0567 1.2. Logging 5.8 Note Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Service on AWS. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 
Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.2.1. Logging 5.8.4 This release includes OpenShift Logging Bug Fix Release 5.8.4 . 1.2.1.1. Bug fixes Before this update, the developer console's logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. ( LOG-4905 ) Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles deploys based on the setting of the default log storage, just like all the roles used to do before. The read roles deploys based on whether the logging console plugin is active. ( LOG-4987 ) Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input will have its own service named with the convention of <CLF.Name>-<input.Name> . ( LOG-5009 ) Before this update, the ClusterLogForwarder did not report errors when forwarding logs to cloudwatch without a secret. With this update, the following error message appears when forwarding logs to cloudwatch without a secret: secret must be provided for cloudwatch output . ( LOG-5021 ) Before this update, the log_forwarder_input_info included application , infrastructure , and audit input metric points. With this update, http is also added as a metric point. ( LOG-5043 ) 1.2.1.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2022-3545 CVE-2022-24963 CVE-2022-36402 CVE-2022-41858 CVE-2023-2166 CVE-2023-2176 CVE-2023-3777 CVE-2023-3812 CVE-2023-4015 CVE-2023-4622 CVE-2023-4623 CVE-2023-5178 CVE-2023-5363 CVE-2023-5388 CVE-2023-5633 CVE-2023-6679 CVE-2023-7104 CVE-2023-27043 CVE-2023-38409 CVE-2023-40283 CVE-2023-42753 CVE-2023-43804 CVE-2023-45803 CVE-2023-46813 CVE-2024-20918 CVE-2024-20919 CVE-2024-20921 CVE-2024-20926 CVE-2024-20945 CVE-2024-20952 1.2.2. Logging 5.8.3 This release includes Logging Bug Fix 5.8.3 and Logging Security Fix 5.8.3 1.2.2.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed. With this update, the Loki Operator is watching for changes to the ConfigMap and automatically updates the generated configuration. ( LOG-4969 ) Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. ( LOG-4822 ) Before this update the Cluster Logging Operator would generate collector configuration fields for outputs that did not specify a secret to use the service account bearer token. With this update, an output does not require authentication, resolving the issue. ( LOG-4962 ) Before this update, the tls.insecureSkipVerify field of an output was not set to a value of true without a secret defined. With this update, a secret is no longer required to set this value. 
( LOG-4963 ) Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. ( LOG-4893 ) 1.2.2.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2023-7104 CVE-2023-27043 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 1.2.3. Logging 5.8.2 This release includes OpenShift Logging Bug Fix Release 5.8.2 . 1.2.3.1. Bug fixes Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4890 ) Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. ( LOG-4947 ) Before this update, the logging view plugin of the Red Hat OpenShift Service on AWS web console did not allow for custom node placement and tolerations. With this update, defining custom node placements and tolerations has been added to the logging view plugin of the Red Hat OpenShift Service on AWS web console. ( LOG-4912 ) 1.2.3.2. CVEs CVE-2022-44638 CVE-2023-1192 CVE-2023-5345 CVE-2023-20569 CVE-2023-26159 CVE-2023-39615 CVE-2023-45871 1.2.4. Logging 5.8.1 This release includes OpenShift Logging Bug Fix Release 5.8.1 and OpenShift Logging Bug Fix Release 5.8.1 Kibana . 1.2.4.1. Enhancements 1.2.4.1.1. Log Collection With this update, while configuring Vector as a collector, you can add logic to the Red Hat OpenShift Logging Operator to use a token specified in the secret in place of the token associated with the service account. ( LOG-4780 ) With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. ( LOG-4828 ) 1.2.4.2. Bug fixes Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. ( LOG-4452 ) Before this update, the Vector set the default log level incorrectly. With this update, the correct log level is set by improving the enhancement of regular expression, or regexp , for log level detection. ( LOG-4480 ) Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4683 ) Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. ( LOG-4706 ) Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder . With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. 
( LOG-4741 ) Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. With this update, the Vector log collector pods start successfully on IBM Power architecture machines. ( LOG-4768 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt . ( LOG-4791 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication and also using the associated token and ca.crt . ( LOG-4852 ) Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. ( LOG-4811 ) Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. ( LOG-4815 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4836 ) Before this update, while removing the inputs.receiver section in the ClusterLogForwarder , the HTTP input services and its associated secrets were not deleted. With this update, the HTTP input resources are deleted when not needed. ( LOG-4612 ) Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. ( LOG-4821 ) Before this update, changing a LogQL query that used controls such as time range or severity changed the label matcher operator defining it like a regular expression. With this update, regular expression operators remain unchanged when updating the query. ( LOG-4841 ) 1.2.4.3. 
CVEs CVE-2007-4559 CVE-2021-3468 CVE-2021-3502 CVE-2021-3826 CVE-2021-43618 CVE-2022-3523 CVE-2022-3565 CVE-2022-3594 CVE-2022-4285 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1076 CVE-2023-1079 CVE-2023-1206 CVE-2023-1249 CVE-2023-1252 CVE-2023-1652 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-2731 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3316 CVE-2023-3358 CVE-2023-3576 CVE-2023-3609 CVE-2023-3772 CVE-2023-3773 CVE-2023-4016 CVE-2023-4128 CVE-2023-4155 CVE-2023-4194 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4273 CVE-2023-4641 CVE-2023-22745 CVE-2023-26545 CVE-2023-26965 CVE-2023-26966 CVE-2023-27522 CVE-2023-29491 CVE-2023-29499 CVE-2023-30456 CVE-2023-31486 CVE-2023-32324 CVE-2023-32573 CVE-2023-32611 CVE-2023-32665 CVE-2023-33203 CVE-2023-33285 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-34410 CVE-2023-35825 CVE-2023-36054 CVE-2023-37369 CVE-2023-38197 CVE-2023-38545 CVE-2023-38546 CVE-2023-39191 CVE-2023-39975 CVE-2023-44487 1.2.5. Logging 5.8.0 This release includes OpenShift Logging Bug Fix Release 5.8.0 and OpenShift Logging Bug Fix Release 5.8.0 Kibana . 1.2.5.1. Deprecation notice In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of Red Hat OpenShift Service on AWS. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. 1.2.5.2. Enhancements 1.2.5.2.1. Log Collection With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the Red Hat OpenShift Service on AWS web console dashboard for Produced Logs . ( LOG-3819 ) With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. ( LOG-1343 ) Important In order to support multi-cluster log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly-performing containers from overloading the Logging and the output limits put a ceiling on the rate of logs shipped to a given data store. 
( LOG-884 ) With this update, you can configure the log collector to look for HTTP connections and receive logs as an HTTP server, also known as a webhook. ( LOG-4562 ) With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. ( LOG-3982 ) 1.2.5.2.2. Log Storage With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. ( LOG-3841 ) With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during Red Hat OpenShift Service on AWS cluster restarts by keeping ingestion and the query path available. ( LOG-3839 ) With this update, the reliability of existing LokiStack installations are seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. ( LOG-3840 ) With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. ( LOG-3266 ) With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for Red Hat OpenShift Service on AWS clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day). ( LOG-4329 ) With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. ( LOG-4327 ) 1.2.5.2.3. Log Console With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. ( LOG-3856 ) With this update, Red Hat OpenShift Service on AWS application owners can receive notifications for application log-based alerts on the Red Hat OpenShift Service on AWS web console Developer perspective for Red Hat OpenShift Service on AWS version 4.14 and later. ( LOG-3548 ) 1.2.5.3. Known Issues Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension. As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system and this should be configured from the Splunk end. Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This created extra work for the server set up and tore down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. ( LOG-4609 ) Currently, when using FluentD as the collector, the collector pod cannot start on the Red Hat OpenShift Service on AWS IPv6-enabled cluster. The pod logs produce the fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known error. There is currently no workaround for this issue. ( LOG-4706 ) Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. 
( LOG-4709 ) Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator . There is currently no workaround for this issue. ( LOG-4403 ) Currently, when deploying the logging version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in CrashLoopBackOff status, while using FluentD as a collector. There is currently no workaround for this issue. ( LOG-3933 ) 1.2.5.4. CVEs CVE-2023-40217 1.3. Logging 5.7 Note Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Service on AWS. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.3.1. Logging 5.7.8 This release includes OpenShift Logging Bug Fix Release 5.7.8 . 1.3.1.1. Bug fixes Before this update, there was a potential conflict when the same name was used for the outputRefs and inputRefs parameters in the ClusterLogForwarder custom resource (CR). As a result, the collector pods entered in a CrashLoopBackOff status. With this update, the output labels contain the OUTPUT_ prefix to ensure a distinction between output labels and pipeline names. ( LOG-4383 ) Before this update, while configuring the JSON log parser, if you did not set the structuredTypeKey or structuredTypeName parameters for the Cluster Logging Operator, no alert would display about an invalid configuration. With this update, the Cluster Logging Operator informs you about the configuration issue. ( LOG-4441 ) Before this update, if the hecToken key was missing or incorrect in the secret specified for a Splunk output, the validation failed because the Vector forwarded logs to Splunk without a token. With this update, if the hecToken key is missing or incorrect, the validation fails with the A non-empty hecToken entry is required error message. ( LOG-4580 ) Before this update, selecting a date from the Custom time range for logs caused an error in the web console. With this update, you can select a date from the time range model in the web console successfully. ( LOG-4684 ) 1.3.1.2. CVEs CVE-2023-40217 CVE-2023-44487 1.3.2. Logging 5.7.7 This release includes OpenShift Logging Bug Fix Release 5.7.7 . 1.3.2.1. Bug fixes Before this update, FluentD normalized the logs emitted by the EventRouter differently from Vector. With this update, the Vector produces log records in a consistent format. ( LOG-4178 ) Before this update, there was an error in the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator as it showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is now renamed to FluentD Buffer Usage . ( LOG-4555 ) Before this update, deploying a LokiStack on IPv6-only or dual-stack Red Hat OpenShift Service on AWS clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6: value to true , which resolves the issue. 
( LOG-4569 ) Before this update, the log collector relied on the default configuration settings for reading the container log lines. As a result, the log collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the log collector to efficiently process rotated files. ( LOG-4575 ) Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is reduction in the memory usage of the Event Router by removing the unused metrics. ( LOG-4686 ) 1.3.2.2. CVEs CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804 CVE-2023-2002 CVE-2023-3090 CVE-2023-3390 CVE-2023-3776 CVE-2023-4004 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4863 CVE-2023-4911 CVE-2023-5129 CVE-2023-20593 CVE-2023-29491 CVE-2023-30630 CVE-2023-35001 CVE-2023-35788 1.3.3. Logging 5.7.6 This release includes OpenShift Logging Bug Fix Release 5.7.6 . 1.3.3.1. Bug fixes Before this update, the collector relied on the default configuration settings for reading the container log lines. As a result, the collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the collector to efficiently process rotated files. ( LOG-4501 ) Before this update, when users pasted a URL with predefined filters, some filters did not reflect. With this update, the UI reflects all the filters in the URL. ( LOG-4459 ) Before this update, forwarding to Loki using custom labels generated an error when switching from Fluentd to Vector. With this update, the Vector configuration sanitizes labels in the same way as Fluentd to ensure the collector starts and correctly processes labels. ( LOG-4460 ) Before this update, the Observability Logs console search field did not accept special characters that it should escape. With this update, it is escaping special characters properly in the query. ( LOG-4456 ) Before this update, the following warning message appeared while sending logs to Splunk: Timestamp was not found. With this update, the change overrides the name of the log field used to retrieve the Timestamp and sends it to Splunk without warning. ( LOG-4413 ) Before this update, the CPU and memory usage of Vector was increasing over time. With this update, the Vector configuration now contains the expire_metrics_secs=60 setting to limit the lifetime of the metrics and cap the associated CPU usage and memory footprint. ( LOG-4171 ) Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4393 ) Before this update, the Fluentd runtime image included builder tools which were unnecessary at runtime. With this update, the builder tools are removed, resolving the issue. ( LOG-4467 ) 1.3.3.2. CVEs CVE-2023-3899 CVE-2023-4456 CVE-2023-32360 CVE-2023-34969 1.3.4. Logging 5.7.4 This release includes OpenShift Logging Bug Fix Release 5.7.4 . 1.3.4.1. Bug fixes Before this update, when forwarding logs to CloudWatch, a namespaceUUID value was not appended to the logGroupName field. With this update, the namespaceUUID value is included, so a logGroupName in CloudWatch appears as logGroupName: vectorcw.b443fb9e-bd4c-4b6a-b9d3-c0097f9ed286 . 
( LOG-2701 ) Before this update, when forwarding logs over HTTP to an off-cluster destination, the Vector collector was unable to authenticate to the cluster-wide HTTP proxy even though correct credentials were provided in the proxy URL. With this update, the Vector log collector can now authenticate to the cluster-wide HTTP proxy. ( LOG-3381 ) Before this update, the Operator would fail if the Fluentd collector was configured with Splunk as an output, due to this configuration being unsupported. With this update, configuration validation rejects unsupported outputs, resolving the issue. ( LOG-4237 ) Before this update, when the Vector collector was updated an enabled = true value in the TLS configuration for AWS Cloudwatch logs and the GCP Stackdriver caused a configuration error. With this update, enabled = true value will be removed for these outputs, resolving the issue. ( LOG-4242 ) Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4275 ) Before this update, an issue in the Loki Operator caused the alert-manager configuration for the application tenant to disappear if the Operator was configured with additional options for that tenant. With this update, the generated Loki configuration now contains both the custom and the auto-generated configuration. ( LOG-4361 ) Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. ( LOG-4368 ) Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana's Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. ( LOG-4389 ) Pipelines with no name field specified in the ClusterLogForwarder custom resource (CR) stopped working after upgrading to OpenShift Logging 5.7. With this update, the error has been resolved. ( LOG-4120 ) 1.3.4.2. CVEs CVE-2022-25883 CVE-2023-22796 1.3.5. Logging 5.7.3 This release includes OpenShift Logging Bug Fix Release 5.7.3 . 1.3.5.1. Bug fixes Before this update, when viewing logs within the Red Hat OpenShift Service on AWS web console, cached files caused the data to not refresh. With this update the bootstrap files are not cached, resolving the issue. ( LOG-4100 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. ( LOG-4156 ) Before this update, the LokiStack ruler did not restart after changes were made to the RulerConfig custom resource (CR). With this update, the Loki Operator restarts the ruler pods after the RulerConfig CR is updated. ( LOG-4161 ) Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4176 ) Before this update, the Loki Operator terminated unexpectedly when a LokiStack CR defined tenant limits, but not global limits. 
With this update, the Loki Operator can process LokiStack CRs without global limits, resolving the issue. ( LOG-4198 ) Before this update, Fluentd did not send logs to an Elasticsearch cluster when the private key provided was passphrase-protected. With this update, Fluentd properly handles passphrase-protected private keys when establishing a connection with Elasticsearch. ( LOG-4258 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4277 ) Before this update, label values containing a / character within the ClusterLogForwarder CR would cause the collector to terminate unexpectedly. With this update, slashes are replaced with underscores, resolving the issue. ( LOG-4095 ) Before this update, the Cluster Logging Operator terminated unexpectedly when set to an unmanaged state. With this update, a check to ensure that the ClusterLogging resource is in the correct Management state before initiating the reconciliation of the ClusterLogForwarder CR, resolving the issue. ( LOG-4177 ) Before this update, when viewing logs within the Red Hat OpenShift Service on AWS web console, selecting a time range by dragging over the histogram did not work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. ( LOG-4108 ) Before this update, when viewing logs within the Red Hat OpenShift Service on AWS web console, queries longer than 30 seconds timed out. With this update, the timeout value can be configured in the configmap/logging-view-plugin. ( LOG-3498 ) Before this update, when viewing logs within the Red Hat OpenShift Service on AWS web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. ( OU-188 ) Before this update, when viewing logs within the Red Hat OpenShift Service on AWS web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-166 ) 1.3.5.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26115 CVE-2023-26136 CVE-2023-26604 CVE-2023-28466 1.3.6. Logging 5.7.2 This release includes OpenShift Logging Bug Fix Release 5.7.2 . 1.3.6.1. Bug fixes Before this update, it was not possible to delete the openshift-logging namespace directly due to the presence of a pending finalizer. With this update, the finalizer is no longer utilized, enabling direct deletion of the namespace. ( LOG-3316 ) Before this update, the run.sh script would display an incorrect chunk_limit_size value if it was changed according to the Red Hat OpenShift Service on AWS documentation. However, when setting the chunk_limit_size via the environment variable USDBUFFER_SIZE_LIMIT , the script would show the correct value. With this update, the run.sh script now consistently displays the correct chunk_limit_size value in both scenarios. ( LOG-3330 ) Before this update, the Red Hat OpenShift Service on AWS web console's logging view plugin did not allow for custom node placement or tolerations. This update adds the ability to define node placement and tolerations for the logging view plugin. 
( LOG-3749 ) Before this update, the Cluster Logging Operator encountered an Unsupported Media Type exception when trying to send logs to DataDog via the Fluentd HTTP Plugin. With this update, users can seamlessly assign the content type for log forwarding by configuring the HTTP header Content-Type. The value provided is automatically assigned to the content_type parameter within the plugin, ensuring successful log transmission. ( LOG-3784 ) Before this update, when the detectMultilineErrors field was set to true in the ClusterLogForwarder custom resource (CR), PHP multi-line errors were recorded as separate log entries, causing the stack trace to be split across multiple messages. With this update, multi-line error detection for PHP is enabled, ensuring that the entire stack trace is included in a single log message. ( LOG-3878 ) Before this update, ClusterLogForwarder pipelines containing a space in their name caused the Vector collector pods to continuously crash. With this update, all spaces, dashes (-), and dots (.) in pipeline names are replaced with underscores (_). ( LOG-3945 ) Before this update, the log_forwarder_output metric did not include the http parameter. This update adds the missing parameter to the metric. ( LOG-3997 ) Before this update, Fluentd did not identify some multi-line JavaScript client exceptions when they ended with a colon. With this update, the Fluentd buffer name is prefixed with an underscore, resolving the issue. ( LOG-4019 ) Before this update, when configuring log forwarding to write to a Kafka output topic which matched a key in the payload, logs dropped due to an error. With this update, Fluentd's buffer name has been prefixed with an underscore, resolving the issue.( LOG-4027 ) Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. ( LOG-4049 ) Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true . With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator's CR: tls.verify_certificate = false tls.verify_hostname = false ( LOG-3445 ) Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to timeout. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. ( LOG-4052 ) Before this update, a prior fix to remove defaulting of the collection.type resulted in the Operator no longer honoring the deprecated specs for resource, node selections, and tolerations. This update modifies the Operator behavior to always prefer the collection.logs spec over those of collection . This varies from behavior that allowed using both the preferred fields and deprecated fields but would ignore the deprecated fields when collection.type was populated. ( LOG-4185 ) Before this update, the Vector log collector did not generate TLS configuration for forwarding logs to multiple Kafka brokers if the broker URLs were not specified in the output. With this update, TLS configuration is generated appropriately for multiple brokers. ( LOG-4163 ) Before this update, the option to enable passphrase for log forwarding to Kafka was unavailable. 
This limitation presented a security risk as it could potentially expose sensitive information. With this update, users now have a seamless option to enable passphrase for log forwarding to Kafka. ( LOG-3314 ) Before this update, Vector log collector did not honor the tlsSecurityProfile settings for outgoing TLS connections. After this update, Vector handles TLS connection settings appropriately. ( LOG-4011 ) Before this update, not all available output types were included in the log_forwarder_output_info metrics. With this update, metrics contain Splunk and Google Cloud Logging data which was missing previously. ( LOG-4098 ) Before this update, when follow_inodes was set to true , the Fluentd collector could crash on file rotation. With this update, the follow_inodes setting does not crash the collector. ( LOG-4151 ) Before this update, the Fluentd collector could incorrectly close files that should be watched because of how those files were tracked. With this update, the tracking parameters have been corrected. ( LOG-4149 ) Before this update, forwarding logs with the Vector collector and naming a pipeline in the ClusterLogForwarder instance audit , application or infrastructure resulted in collector pods staying in the CrashLoopBackOff state with the following error in the collector log: ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit After this update, pipeline names no longer clash with reserved input names, and pipelines can be named audit , application or infrastructure . ( LOG-4218 ) Before this update, when forwarding logs to a syslog destination with the Vector collector and setting the addLogSource flag to true , the following extra empty fields were added to the forwarded messages: namespace_name= , container_name= , and pod_name= . With this update, these fields are no longer added to journal logs. ( LOG-4219 ) Before this update, when a structuredTypeKey was not found, and a structuredTypeName was not specified, log messages were still parsed into structured object. With this update, parsing of logs is as expected. ( LOG-4220 ) 1.3.6.2. CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-25147 CVE-2022-25265 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-22490 CVE-2023-23454 CVE-2023-23946 CVE-2023-25652 CVE-2023-25815 CVE-2023-27535 CVE-2023-29007 1.3.7. Logging 5.7.1 This release includes: OpenShift Logging Bug Fix Release 5.7.1 . 1.3.7.1. Bug fixes Before this update, the presence of numerous noisy messages within the Cluster Logging Operator pod logs caused reduced log readability, and increased difficulty in identifying important system events. With this update, the issue is resolved by significantly reducing the noisy messages within Cluster Logging Operator pod logs. ( LOG-3482 ) Before this update, the API server would reset the value for the CollectorSpec.Type field to vector , even when the custom resource used a different value. 
This update removes the default for the CollectorSpec.Type field to restore the behavior. ( LOG-4086 ) Before this update, a time range could not be selected in the Red Hat OpenShift Service on AWS web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. ( LOG-4501 ) Before this update, clicking on the Show Resources link in the Red Hat OpenShift Service on AWS web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the "Show Resources" link to toggle the display of resources for each log entry. ( LOG-3218 ) 1.3.7.2. CVEs CVE-2023-21930 CVE-2023-21937 CVE-2023-21938 CVE-2023-21939 CVE-2023-21954 CVE-2023-21967 CVE-2023-21968 CVE-2023-28617 1.3.8. Logging 5.7.0 This release includes OpenShift Logging Bug Fix Release 5.7.0 . 1.3.8.1. Enhancements With this update, you can enable logging to detect multi-line exceptions and reassemble them into a single log entry. To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true . 1.3.8.2. Known Issues None. 1.3.8.3. Bug fixes Before this update, the nodeSelector attribute for the Gateway component of the LokiStack did not impact node scheduling. With this update, the nodeSelector attribute works as expected. ( LOG-3713 ) 1.3.8.4. CVEs CVE-2023-1999 CVE-2023-28617 | [
"tls.verify_certificate = false tls.verify_hostname = false",
"ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/release-notes |
Chapter 7. Managing domains | Chapter 7. Managing domains Identity Service (keystone) domains are additional namespaces that you can create in keystone. Use keystone domains to partition users, groups, and projects. You can also configure these separate domains to authenticate users in different LDAP or Active Directory environments. For more information, see the Integrate with Identity Service guide. Note Identity Service includes a built-in domain called Default . It is suggested you reserve this domain only for service accounts, and create a separate domain for user accounts. 7.1. Viewing a list of domains You can view a list of domains with the openstack domain list command: 7.2. Creating a new domain You can create a new domain with the openstack domain create command: 7.3. Viewing the details of a domain You can view the details of a domain with the openstack domain show command: 7.4. Disabling a domain You can disable and enable domains according to your requirements. Procedure Disable a domain using the --disable option: Confirm that the domain has been disabled: Use the --enable option to re-enable the domain, if required: | [
"openstack domain list +----------------------------------+------------------+---------+--------------------+ | ID | Name | Enabled | Description | +----------------------------------+------------------+---------+--------------------+ | 3abefa6f32c14db9a9703bf5ce6863e1 | TestDomain | True | | | 69436408fdcb44ab9e111691f8e9216d | corp | True | | | a4f61a8feb8d4253b260054c6aa41adb | federated_domain | True | | | default | Default | True | The default domain | +----------------------------------+------------------+---------+--------------------+",
"openstack domain create TestDomain +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+",
"openstack domain show TestDomain +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+",
"openstack domain set TestDomain --disable",
"openstack domain show TestDomain +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | False | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+",
"openstack domain set TestDomain --enable"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/users_and_identity_management_guide/assembly_domains |
Chapter 5. Ceph File System administration | Chapter 5. Ceph File System administration As a storage administrator, you can perform common Ceph File System (CephFS) administrative tasks, such as: Monitoring CephFS metrics in real-time, see Section 5.1, "Using the cephfs-top utility" Mapping a directory to a particular MDS rank, see Section 5.5, "Mapping directory trees to Metadata Server daemon ranks" . Disassociating a directory from a MDS rank, see Section 5.6, "Disassociating directory trees from Metadata Server daemon ranks" . Adding a new data pool, see Section 5.7, "Adding data pools" . Working with quotas, see Chapter 6, Ceph File System quotas . Working with files and directory layouts, see Chapter 7, File and directory layouts . Removing a Ceph File System, see Section 5.9, "Removing a Ceph File System" . Client features, see Section 5.11, "Client features" . Using the ceph mds fail command, see Section 5.10, "Using the ceph mds fail command" . Manually evict a CephFS client, see Section 5.14, "Manually evicting a Ceph File System client" Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemons ( ceph-mds ). Create and mount a Ceph File System. 5.1. Using the cephfs-top utility The Ceph File System (CephFS) provides a top -like utility to display metrics on Ceph File Systems in realtime. The cephfs-top utility is a curses -based Python script that uses the Ceph Manager stats module to fetch and display client performance metrics. Currently, the cephfs-top utility supports nearly 10k clients. Note Currently, not all of the performance stats are available in the Red Hat Enterprise Linux 8 kernel. cephfs-top is supported on Red Hat Enterprise Linux 8 and above and uses one of the standard terminals in Red Hat Enterprise Linux. Important The minimum compatible python version for cephfs-top utility is 3.6.0. Prerequisites A healthy and running Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph client node. Installation of the cephfs-top package. Procedure Enable the Red Hat Ceph Storage 6 tools repository, if it is not already enabled: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the cephfs-top package: Example Enable the Ceph Manager stats plugin: Example Create the client.fstop Ceph user: Example Note Optionally, use the --id argument to specify a different Ceph user, other than client.fstop . Start the cephfs-top utility: Example 5.1.1. The cephfs-top utility interactive commands Select a particular file system and view the metrics related to that file system with the cephfs-top utility interactive commands. m Description Filesystem selection: Displays a menu of file systems for selection. q Description Quit: Exits the utility if you are at the home screen with all file system information. If you are not at the home screen, it redirects you back to the home screen. s Description Sort field selection: Designates the sort field. 'cap_hit' is the default. l Description Client limit: Sets the limit on the number of clients to be displayed. r Description Reset: Resets the sort field and limit value to the default. The metrics display can be scrolled using the Arrow Keys, PgUp/PgDn, Home/End and mouse. Example of entering and exiting the file system selection menu 5.1.2. The cephfs-top utility options You can use the cephfs-top utility command with various options. 
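For instance, two common invocations might look like the following (a sketch; the file system name cephfs is assumed, and the default client.fstop user is used):
cephfs-top -d 5
cephfs-top --dumpfs cephfs
The first command refreshes the statistics every 5 seconds instead of the default 1 second; the second dumps the metrics of the cephfs file system to stdout instead of starting the curses display.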
Example --cluster NAME_OF_THE_CLUSTER Description With this option, you can connect to the non-default cluster name. The default name is ceph . --id USER Description This is a client which connects to the Ceph cluster and is fstop by default. --selftest Description With this option, you can perform a selftest. This mode performs a sanity check of stats module. --conffile PATH_TO_THE_CONFIGURATION_FILE Description With this option, you can provide a path to the Ceph cluster configuration file. -d/--delay INTERVAL_IN_SECONDS Description The cephfs-top utility refreshes statistics every second by default. With this option, you can change a refresh interval. Note Interval should be greater than or equal to 1 seconds. Fractional seconds are honored. --dump Description With this option, you can dump the metrics to stdout without creating a curses display use. --dumpfs FILESYSTEM_NAME Description With this option, you can dump the metrics of the given filesystem to stdout without creating a curses display use. 5.2. Using the MDS autoscaler module The MDS Autoscaler Module monitors the Ceph File System (CephFS) to ensure sufficient MDS daemons are available. It works by adjusting the placement specification for the Orchestrator backend of the MDS service. The module monitors the following file system settings to inform placement count adjustments: max_mds file system setting standby_count_wanted file system setting The Ceph monitor daemons are still responsible for promoting or stopping MDS according to these settings. The mds_autoscaler simply adjusts the number of MDS which are spawned by the orchestrator. Prerequisites A healthy and running Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to a Ceph Monitor node. Procedure Enable the MDS autoscaler module: Example 5.3. Unmounting Ceph File Systems mounted as kernel clients How to unmount a Ceph File System that is mounted as a kernel client. Prerequisites Root-level access to the node doing the mounting. Procedure To unmount a Ceph File System mounted as a kernel client: Syntax Example Additional Resources The umount(8) manual page 5.4. Unmounting Ceph File Systems mounted as FUSE clients Unmounting a Ceph File System that is mounted as a File System in User Space (FUSE) client. Prerequisites Root-level access to the FUSE client node. Procedure To unmount a Ceph File System mounted in FUSE: Syntax Example Additional Resources The ceph-fuse(8) manual page 5.5. Mapping directory trees to Metadata Server daemon ranks You can map a directory and its subdirectories to a particular active Metadata Server (MDS) rank so that its metadata is only managed by the MDS daemon holding that rank. This approach enables you to evenly spread application load or the limit impact of users' metadata requests to the entire storage cluster. Important An internal balancer already dynamically spreads the application load. Therefore, only map directory trees to ranks for certain carefully chosen applications. In addition, when a directory is mapped to a rank, the balancer cannot split it. Consequently, a large number of operations within the mapped directory can overload the rank and the MDS daemon that manages it. Prerequisites At least two active MDS daemons. User access to the CephFS client node. Verify that the attr package is installed on the CephFS client node with a mounted Ceph File System. 
Procedure Add the p flag to the Ceph user's capabilities: Syntax Example Set the ceph.dir.pin extended attribute on a directory: Syntax Example This example assigns the /temp directory and all of its subdirectories to rank 2. Additional Resources See the Layout, quota, snapshot, and network restrictions section in the Red Hat Ceph Storage File System Guide for more details about the p flag. See the Manually pinning directory trees to a particular rank section in the Red Hat Ceph Storage File System Guide for more details. See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide for more details. 5.6. Disassociating directory trees from Metadata Server daemon ranks Disassociate a directory from a particular active Metadata Server (MDS) rank. Prerequisites User access to the Ceph File System (CephFS) client node. Ensure that the attr package is installed on the client node with a mounted CephFS. Procedure Set the ceph.dir.pin extended attribute to -1 on a directory: Syntax Example Note Any separately mapped subdirectories of /home/ceph-user/ are not affected. Additional Resources See the Mapping directory trees to Metadata Server daemon ranks section in Red Hat Ceph Storage File System Guide for more details. 5.7. Adding data pools The Ceph File System (CephFS) supports adding more than one pool to be used for storing data. This can be useful for: Storing log data on reduced redundancy pools. Storing user home directories on an SSD or NVMe pool. Basic data segregation. Before using another data pool in the Ceph File System, you must add it as described in this section. By default, for storing file data, CephFS uses the initial data pool that was specified during its creation. To use a secondary data pool, you must also configure a part of the file system hierarchy to store file data in that pool or optionally within a namespace of that pool, using file and directory layouts. Prerequisites Root-level access to the Ceph Monitor node. Procedure Create a new data pool: Syntax Replace: POOL_NAME with the name of the pool. Example Add the newly created pool under the control of the Metadata Servers: Syntax Replace: FS_NAME with the name of the file system. POOL_NAME with the name of the pool. Example: Verify that the pool was successfully added: Example Optional: Remove a data pool from the file system: Syntax Example: Verify that the pool was successfully removed: Example If you use the cephx authentication, make sure that clients can access the new pool. Additional Resources See the File and directory layouts section in the Red Hat Ceph Storage File System Guide for details. See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details. 5.8. Taking down a Ceph File System cluster You can take down Ceph File System (CephFS) cluster by setting the down flag to true . Doing this gracefully shuts down the Metadata Server (MDS) daemons by flushing journals to the metadata pool and stopping all client I/O. You can also take the CephFS cluster down quickly to test the deletion of a file system and bring the Metadata Server (MDS) daemons down, for example, when practicing a disaster recovery scenario. Doing this sets the jointable flag to prevent the MDS standby daemons from activating the file system. Prerequisites Root-level access to a Ceph Monitor node. 
Procedure To mark the CephFS cluster down: Syntax Example To bring the CephFS cluster back up: Syntax Example or To quickly take down a CephFS cluster: Syntax Example Note To get the CephFS cluster back up, set cephfs to joinable : Syntax Example 5.9. Removing a Ceph File System You can remove a Ceph File System (CephFS). Before doing so, consider backing up all the data and verifying that all clients have unmounted the file system locally. Warning This operation is destructive and will make the data stored on the Ceph File System permanently inaccessible. Prerequisites Back up your data. Root-level access to a Ceph Monitor node. Procedure Mark the storage cluster as down: Syntax Replace FS_NAME with the name of the Ceph File System you want to remove. Example Display the status of the Ceph File System: Example Remove the Ceph File System: Syntax Replace FS_NAME with the name of the Ceph File System you want to remove. Example Verify that the file system has been successfully removed: Example Optional. Remove data and metadata pools associated with the removed file system. Additional Resources See the Delete a Pool section in the Red Hat Ceph Storage Storage Strategies Guide . 5.10. Using the ceph mds fail command Use the ceph mds fail command to: Mark a MDS daemon as failed. If the daemon was active and a suitable standby daemon was available, and if the standby daemon was active after disabling the standby-replay configuration, using this command forces a failover to the standby daemon. By disabling the standby-replay daemon, this prevents new standby-replay daemons from being assigned. Restart a running MDS daemon. If the daemon was active and a suitable standby daemon was available, the "failed" daemon becomes a standby daemon. Prerequisites Installation and configuration of the Ceph MDS daemons. Procedure To fail a daemon: Syntax Where MDS_NAME is the name of the standby-replay MDS node. Example Note You can find the Ceph MDS name from the ceph fs status command. Additional Resources See the Decreasing the number of active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide . See the Configuring the number of standby daemons section in the Red Hat Ceph Storage File System Guide . See the Metadata Server ranks section in the Red Hat Ceph Storage File System Guide . 5.11. Client features At times you might want to set Ceph File System (CephFS) features that clients must support to enable them to use Ceph File Systems. Clients without these features might disrupt other CephFS clients, or behave in unexpected ways. Also, you might want to require new features to prevent older, and possibly buggy clients from connecting to a Ceph File System. Important CephFS clients missing newly added features are evicted automatically. You can list all the CephFS features by using the fs features ls command. You can add or remove requirements by using the fs required_client_features command. Syntax Feature Descriptions reply_encoding Description The Ceph Metadata Server (MDS) encodes reply requests in extensible format, if the client supports this feature. reclaim_client Description The Ceph MDS allows a new client to reclaim another, perhaps a dead, client's state. This feature is used by NFS Ganesha. lazy_caps_wanted Description When a stale client resumes, the Ceph MDS only needs to re-issue the capabilities that are explicitly wanted, if the client supports this feature. 
multi_reconnect Description After a Ceph MDS failover event, the client sends a reconnect message to the MDS to reestablish cache states. A client can split large reconnect messages into multiple messages. deleg_ino Description A Ceph MDS delegates inode numbers to a client, if the client supports this feature. Delegating inode numbers is a prerequisite for a client to do async file creation. metric_collect Description CephFS clients can send performance metrics to a Ceph MDS. alternate_name Description CephFS clients can set and understand alternate names for directory entries. This feature allows for encrypted file names. 5.12. Ceph File System client evictions When a Ceph File System (CephFS) client is unresponsive or misbehaving, it might be necessary to forcibly terminate, or evict it from accessing the CephFS. Evicting a CephFS client prevents it from communicating further with Metadata Server (MDS) daemons and Ceph OSD daemons. If a CephFS client is buffering I/O to the CephFS at the time of eviction, then any un-flushed data will be lost. The CephFS client eviction process applies to all client types: FUSE mounts, kernel mounts, NFS gateways, and any process using libcephfs API library. You can evict CephFS clients automatically, if they fail to communicate promptly with the MDS daemon, or manually. Automatic Client Eviction These scenarios cause an automatic CephFS client eviction: If a CephFS client has not communicated with the active MDS daemon for over the default of 300 seconds, or as set by the session_autoclose option. If the mds_cap_revoke_eviction_timeout option is set, and a CephFS client has not responded to the cap revoke messages for over the set amount of seconds. The mds_cap_revoke_eviction_timeout option is disabled by default. During MDS startup or failover, the MDS daemon goes through a reconnect phase waiting for all the CephFS clients to connect to the new MDS daemon. If any CephFS clients fail to reconnect within the default time window of 45 seconds, or as set by the mds_reconnect_timeout option. Additional Resources See the Manually evicting a Ceph File System client section in the Red Hat Ceph Storage File System Guide for more details. 5.13. Blocklist Ceph File System clients Ceph File System (CephFS) client blocklisting is enabled by default. When you send an eviction command to a single Metadata Server (MDS) daemon, it propagates the blocklist to the other MDS daemons. This is to prevent the CephFS client from accessing any data objects, so it is necessary to update the other CephFS clients, and MDS daemons with the latest Ceph OSD map, which includes the blocklisted client entries. An internal "osdmap epoch barrier" mechanism is used when updating the Ceph OSD map. The purpose of the barrier is to verify the CephFS clients receiving the capabilities have a sufficiently recent Ceph OSD map, before any capabilities are assigned that might allow access to the same RADOS objects, as to not race with canceled operations, such as, from ENOSPC or blocklisted clients from evictions. If you are experiencing frequent CephFS client evictions due to slow nodes or an unreliable network, and you cannot fix the underlying issue, then you can ask the MDS to be less strict. It is possible to respond to slow CephFS clients by simply dropping their MDS sessions, but permit the CephFS client to re-open sessions and to continue talking to Ceph OSDs. By setting the mds_session_blocklist_on_timeout and mds_session_blocklist_on_evict options to false enables this mode. 
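One way to apply this relaxed mode is with the ceph config command (a sketch; applying the settings to all MDS daemons at once is assumed):
ceph config set mds mds_session_blocklist_on_timeout false
ceph config set mds mds_session_blocklist_on_evict false
Both settings are enabled by default, so only set them to false if you accept the trade-offs described above.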
Note When blocklisting is disabled, the evicted CephFS client has only an effect on the MDS daemon you send the command to. On a system with multiple active MDS daemons, you need to send an eviction command to each active daemon. 5.14. Manually evicting a Ceph File System client You might want to manually evict a Ceph File System (CephFS) client, if the client is misbehaving and you do not have access to the client node, or if a client dies, and you do not want to wait for the client session to time out. Prerequisites Root-level access to the Ceph Monitor node. Procedure Review the client list: Syntax Example Evict the specified CephFS client: Syntax Example 5.15. Removing a Ceph File System client from the blocklist In some situations, it can be useful to allow a previously blocklisted Ceph File System (CephFS) client to reconnect to the storage cluster. Important Removing a CephFS client from the blocklist puts data integrity at risk, and does not guarantee a fully healthy, and functional CephFS client as a result. The best way to get a fully healthy CephFS client back after an eviction, is to unmount the CephFS client and do a fresh mount. If other CephFS clients are accessing files that the blocklisted CephFS client was buffering I/O to, it can result in data corruption. Prerequisites Root-level access to the Ceph Monitor node. Procedure Review the blocklist: Example Remove the CephFS client from the blocklist: Syntax Example Optionally, you can have kernel-based CephFS clients automatically reconnect when removing them from the blocklist. On the kernel-based CephFS client, set the following option to clean either when doing a manual mount, or automatically mounting with an entry in the /etc/fstab file: Optionally, you can have FUSE-based CephFS clients automatically reconnect when removing them from the blocklist. On the FUSE client, set the following option to true either when doing a manual mount, or automatically mounting with an entry in the /etc/fstab file: Additional Resources See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage File System Guide for more information. | [
"subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms",
"dnf install cephfs-top",
"ceph mgr module enable stats",
"ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring",
"cephfs-top cephfs-top - Wed Nov 30 15:26:05 2022 All Filesystem Info Total Client(s): 4 - 3 FUSE, 1 kclient, 0 libcephfs COMMANDS: m - select a filesystem | s - sort menu | l - limit number of clients | r - reset to default | q - quit client_id mount_root chit(%) dlease(%) ofiles oicaps oinodes rtio(MB) raio(MB) rsp(MB/s) wtio(MB) waio(MB) wsp(MB/s) rlatavg(ms) rlatsd(ms) wlatavg(ms) wlatsd(ms) mlatavg(ms) mlatsd(ms) mount_point@host/addr Filesystem: cephfs1 - 2 client(s) 4500 / 100.0 100.0 0 751 0 0.0 0.0 0.0 578.13 0.03 0.0 N/A N/A N/A N/A N/A N/A N/A@example/192.168.1.4 4501 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.41 0.0 /mnt/cephfs2@example/192.168.1.4 Filesystem: cephfs2 - 2 client(s) 4512 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.0 /mnt/cephfs3@example/192.168.1.4 4518 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.52 0.0 /mnt/cephfs4@example/192.168.1.4",
"m Filesystems Press \"q\" to go back to home (all filesystem info) screen cephfs01 cephfs02 q cephfs-top - Thu Oct 20 07:29:35 2022 Total Client(s): 3 - 2 FUSE, 1 kclient, 0 libcephfs",
"cephfs-top --selftest selftest ok",
"ceph mgr module enable mds_autoscaler",
"umount MOUNT_POINT",
"umount /mnt/cephfs",
"fusermount -u MOUNT_POINT",
"fusermount -u /mnt/cephfs",
"ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ]",
"[user@client ~]USD ceph fs authorize cephfs_a client.1 /temp rwp client.1 key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps: [mds] allow r, allow rwp path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a",
"setfattr -n ceph.dir.pin -v RANK DIRECTORY",
"[user@client ~]USD setfattr -n ceph.dir.pin -v 2 /temp",
"setfattr -n ceph.dir.pin -v -1 DIRECTORY",
"[user@client ~]USD setfattr -n ceph.dir.pin -v -1 /home/ceph-user",
"ceph osd pool create POOL_NAME",
"ceph osd pool create cephfs_data_ssd pool 'cephfs_data_ssd' created",
"ceph fs add_data_pool FS_NAME POOL_NAME",
"ceph fs add_data_pool cephfs cephfs_data_ssd added data pool 6 to fsmap",
"ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]",
"ceph fs rm_data_pool FS_NAME POOL_NAME",
"ceph fs rm_data_pool cephfs cephfs_data_ssd removed data pool 6 from fsmap",
"ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs.cephfs.data]",
"ceph fs set FS_NAME down true",
"ceph fs set cephfs down true",
"ceph fs set FS_NAME down false",
"ceph fs set cephfs down false",
"ceph fs fail FS_NAME",
"ceph fs fail cephfs",
"ceph fs set FS_NAME joinable true",
"ceph fs set cephfs joinable true cephfs marked joinable; MDS may join as newly active.",
"ceph fs set FS_NAME down true",
"ceph fs set cephfs down true cephfs marked down.",
"ceph fs status",
"ceph fs status cephfs - 0 clients ====== +-------------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+------------+-------+-------+ |cephfs.cephfs.meta | metadata | 31.5M | 52.6G| |cephfs.cephfs.data | data | 0 | 52.6G| +-----------------+----------+-------+---------+ STANDBY MDS cephfs.ceph-host01 cephfs.ceph-host02 cephfs.ceph-host03",
"ceph fs rm FS_NAME --yes-i-really-mean-it",
"ceph fs rm cephfs --yes-i-really-mean-it",
"ceph fs ls",
"ceph mds fail MDS_NAME",
"ceph mds fail example01",
"fs required_client_features FILE_SYSTEM_NAME add FEATURE_NAME fs required_client_features FILE_SYSTEM_NAME rm FEATURE_NAME",
"ceph tell DAEMON_NAME client ls",
"ceph tell mds.0 client ls [ { \"id\": 4305, \"num_leases\": 0, \"num_caps\": 3, \"state\": \"open\", \"replay_requests\": 0, \"completed_requests\": 0, \"reconnecting\": false, \"inst\": \"client.4305 172.21.9.34:0/422650892\", \"client_metadata\": { \"ceph_sha1\": \"79f0367338897c8c6d9805eb8c9ad24af0dcd9c7\", \"ceph_version\": \"ceph version 16.2.8-65.el8cp (79f0367338897c8c6d9805eb8c9ad24af0dcd9c7)\", \"entity_id\": \"0\", \"hostname\": \"senta04\", \"mount_point\": \"/tmp/tmpcMpF1b/mnt.0\", \"pid\": \"29377\", \"root\": \"/\" } } ]",
"ceph tell DAEMON_NAME client evict id= ID_NUMBER",
"ceph tell mds.0 client evict id=4305",
"ceph osd blocklist ls listed 1 entries 127.0.0.1:0/3710147553 2022-05-09 11:32:24.716146",
"ceph osd blocklist rm CLIENT_NAME_OR_IP_ADDR",
"ceph osd blocklist rm 127.0.0.1:0/3710147553 un-blocklisting 127.0.0.1:0/3710147553",
"recover_session=clean",
"client_reconnect_stale=true"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/ceph-file-system-administration |
Chapter 368. Jetty Websocket Component | Chapter 368. Jetty Websocket Component Available as of Camel version 2.10 The websocket component provides websocket endpoints for communicating with clients using websocket. The component uses Eclipse Jetty Server which implements the IETF specification (drafts and RFC 6455). It supports the protocols ws:// and wss://. To use wss:// protocol, the SSLContextParameters must be defined. Version currently supported Camel 2.18 uses Jetty 9 368.1. URI format websocket://hostname[:port][/resourceUri][?options] You can append query options to the URI in the following format, ?option=value&option=value&... 368.2. Websocket Options The Jetty Websocket component supports 14 options, which are listed below. Name Description Default Type staticResources (consumer) Set a resource path for static resources (such as .html files etc). The resources can be loaded from classpath, if you prefix with classpath:, otherwise the resources is loaded from file system or from JAR files. For example to load from root classpath use classpath:., or classpath:WEB-INF/static If not configured (eg null) then no static resource is in use. String host (common) The hostname. The default value is 0.0.0.0 0.0.0.0 String port (common) The port number. The default value is 9292 9292 Integer sslKeyPassword (security) The password for the keystore when using SSL. String sslPassword (security) The password when using SSL. String sslKeystore (security) The path to the keystore. String enableJmx (advanced) If this option is true, Jetty JMX support will be enabled for this endpoint. See Jetty JMX support for more details. false boolean minThreads (advanced) To set a value for minimum number of threads in server thread pool. MaxThreads/minThreads or threadPool fields are required due to switch to Jetty9. The default values for minThreads is 1. Integer maxThreads (advanced) To set a value for maximum number of threads in server thread pool. MaxThreads/minThreads or threadPool fields are required due to switch to Jetty9. The default values for maxThreads is 1 2 noCores. Integer threadPool (advanced) To use a custom thread pool for the server. MaxThreads/minThreads or threadPool fields are required due to switch to Jetty9. ThreadPool sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean socketFactory (common) To configure a map which contains custom WebSocketFactory for sub protocols. The key in the map is the sub protocol. The default key is reserved for the default implementation. Map resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Jetty Websocket endpoint is configured using URI syntax: with the following path and query parameters: 368.2.1. Path Parameters (3 parameters): Name Description Default Type host The hostname. The default value is 0.0.0.0. Setting this option on the component will use the component configured value as default. 0.0.0.0 String port The port number. The default value is 9292. Setting this option on the component will use the component configured value as default. 9292 Integer resourceUri Required Name of the websocket channel to use String 368.2.2. 
Query Parameters (18 parameters): Name Description Default Type maxBinaryMessageSize (common) Can be used to set the size in bytes that the websocket created by the websocketServlet may be accept before closing. (Default is -1 - or unlimited) -1 Integer bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sessionSupport (consumer) Whether to enable session support which enables HttpSession for each http request. false boolean staticResources (consumer) Set a resource path for static resources (such as .html files etc). The resources can be loaded from classpath, if you prefix with classpath:, otherwise the resources is loaded from file system or from JAR files. For example to load from root classpath use classpath:., or classpath:WEB-INF/static If not configured (eg null) then no static resource is in use. String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern sendTimeout (producer) Timeout in millis when sending to a websocket channel. The default timeout is 30000 (30 seconds). 30000 Integer sendToAll (producer) To send to all websocket subscribers. Can be used to configure on endpoint level, instead of having to use the WebsocketConstants.SEND_TO_ALL header on the message. Boolean bufferSize (advanced) Set the buffer size of the websocketServlet, which is also the max frame byte size (default 8192) 8192 Integer maxIdleTime (advanced) Set the time in ms that the websocket created by the websocketServlet may be idle before closing. (default is 300000) 300000 Integer maxTextMessageSize (advanced) Can be used to set the size in characters that the websocket created by the websocketServlet may be accept before closing. Integer minVersion (advanced) Can be used to set the minimum protocol version accepted for the websocketServlet. (Default 13 - the RFC6455 version) 13 Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean allowedOrigins (cors) The CORS allowed origins. Use to allow all. String crossOriginFilterOn (cors) Whether to enable CORS false boolean filterPath (cors) Context path for filtering CORS String enableJmx (monitoring) If this option is true, Jetty JMX support will be enabled for this endpoint. See Jetty JMX support for more details. false boolean sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters 368.3. Spring Boot Auto-Configuration The component supports 15 options, which are listed below. Name Description Default Type camel.component.websocket.enable-jmx If this option is true, Jetty JMX support will be enabled for this endpoint. See Jetty JMX support for more details. 
false Boolean camel.component.websocket.enabled Enable websocket component true Boolean camel.component.websocket.host The hostname. The default value is 0.0.0.0 0.0.0.0 String camel.component.websocket.max-threads To set a value for maximum number of threads in server thread pool. MaxThreads/minThreads or threadPool fields are required due to switch to Jetty9. The default values for maxThreads is 1 2 noCores. Integer camel.component.websocket.min-threads To set a value for minimum number of threads in server thread pool. MaxThreads/minThreads or threadPool fields are required due to switch to Jetty9. The default values for minThreads is 1. Integer camel.component.websocket.port The port number. The default value is 9292 9292 Integer camel.component.websocket.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.websocket.socket-factory To configure a map which contains custom WebSocketFactory for sub protocols. The key in the map is the sub protocol. The default key is reserved for the default implementation. Map camel.component.websocket.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.websocket.ssl-key-password The password for the keystore when using SSL. String camel.component.websocket.ssl-keystore The path to the keystore. String camel.component.websocket.ssl-password The password when using SSL. String camel.component.websocket.static-resources Set a resource path for static resources (such as .html files etc). The resources can be loaded from classpath, if you prefix with classpath:, otherwise the resources is loaded from file system or from JAR files. For example to load from root classpath use classpath:., or classpath:WEB-INF/static If not configured (eg null) then no static resource is in use. String camel.component.websocket.thread-pool To use a custom thread pool for the server. MaxThreads/minThreads or threadPool fields are required due to switch to Jetty9. The option is a org.eclipse.jetty.util.thread.ThreadPool type. String camel.component.websocket.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 368.4. Message Headers The websocket component uses 2 headers to indicate to either send messages back to a single/current client, or to all clients. WebsocketConstants.SEND_TO_ALL Sends the message to all clients which are currently connected. You can use the sendToAll option on the endpoint instead of using this header. WebsocketConstants.CONNECTION_KEY Sends the message to the client with the given connection key. WebsocketConstants.REMOTE_ADDRESS Remote address of the websocket session. 368.5. Usage In this example we let Camel exposes a websocket server which clients can communicate with. The websocket server uses the default host and port, which would be 0.0.0.0:9292 . The example will send back an echo of the input. To send back a message, we need to send the transformed message to the same endpoint "websocket://echo" . This is needed because by default the messaging is InOnly. This example is part of an unit test, which you can find here . As a client we use the AHC library which offers support for web socket as well. 
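For illustration, a minimal echo route of this kind might look like the following sketch (the websocket://echo endpoint name follows the description above; the logging statement and the "Echo:" transformation are illustrative):
from("websocket://echo")
    // log the text frame received from the connected client
    .log(">>> Message received from WebSocket Client : ${body}")
    // transform the message before sending it back
    .transform().simple("Echo: ${body}")
    // produce the reply to the same websocket endpoint so it is returned to the client
    .to("websocket://echo");
Because the exchange is InOnly, the reply is delivered by producing the transformed message back to the same websocket endpoint, as described above.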
Here is another example where webapp resources location have been defined to allow the Jetty Application Server to not only register the WebSocket servlet but also to expose web resources for the browser. Resources should be defined under the webapp directory. from("activemq:topic:newsTopic") .routeId("fromJMStoWebSocket") .to("websocket://localhost:8443/newsTopic?sendToAll=true&staticResources=classpath:webapp"); 368.6. Setting up SSL for WebSocket Component 368.6.1. Using the JSSE Configuration Utility As of Camel 2.10, the WebSocket component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Cometd component. Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); scp.setTrustManagers(tmp); CometdComponent commetdComponent = getContext().getComponent("cometds", CometdComponent.class); commetdComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> <camel:trustManagers> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:trustManagers> </camel:sslContextParameters>... ... <to uri="websocket://127.0.0.1:8443/test?sslContextParameters=#sslContextParameters"/>... Java DSL based configuration of endpoint ... protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() { String uri = "websocket://127.0.0.1:8443/test?sslContextParameters=#sslContextParameters"; from(uri) .log(">>> Message received from WebSocket Client : USD{body}") .to("mock:client") .loop(10) .setBody().constant(">> Welcome on board!") .to(uri); ... 368.7. See Also Configuring Camel Component Endpoint Getting Started AHC Jetty Twitter Websocket Example demonstrates how to poll a constant feed of twitter searches and publish results in real time using web socket to a web page. | [
"websocket://hostname[:port][/resourceUri][?options]",
"websocket:host:port/resourceUri",
"from(\"activemq:topic:newsTopic\") .routeId(\"fromJMStoWebSocket\") .to(\"websocket://localhost:8443/newsTopic?sendToAll=true&staticResources=classpath:webapp\");",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); scp.setTrustManagers(tmp); CometdComponent commetdComponent = getContext().getComponent(\"cometds\", CometdComponent.class); commetdComponent.setSslContextParameters(scp);",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> <camel:trustManagers> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:trustManagers> </camel:sslContextParameters> <to uri=\"websocket://127.0.0.1:8443/test?sslContextParameters=#sslContextParameters\"/>",
"protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { public void configure() { String uri = \"websocket://127.0.0.1:8443/test?sslContextParameters=#sslContextParameters\"; from(uri) .log(\">>> Message received from WebSocket Client : USD{body}\") .to(\"mock:client\") .loop(10) .setBody().constant(\">> Welcome on board!\") .to(uri);"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/websocket-component |
3.7. Backing Up and Restoring XFS File Systems | 3.7. Backing Up and Restoring XFS File Systems XFS file system backup and restoration involve these utilities: xfsdump for creating the backup xfsrestore for restoring from backup 3.7.1. Features of XFS Backup and Restoration Backup You can use the xfsdump utility to: Perform backups to regular file images. Only one backup can be written to a regular file. Perform backups to tape drives. The xfsdump utility also allows you to write multiple backups to the same tape. A backup can span multiple tapes. To back up multiple file systems to a single tape device, simply write the backup to a tape that already contains an XFS backup. This appends the new backup to the previous one. By default, xfsdump never overwrites existing backups. Create incremental backups. The xfsdump utility uses dump levels to determine a base backup to which other backups are relative. Numbers from 0 to 9 refer to increasing dump levels. An incremental backup only backs up files that have changed since the last dump of a lower level: To perform a full backup, perform a level 0 dump on the file system. A level 1 dump is the first incremental backup after a full backup. The next incremental backup would be level 2, which only backs up files that have changed since the last level 1 dump, and so on, to a maximum of level 9. Exclude files from a backup using size, subtree, or inode flags to filter them. Restoration The xfsrestore utility restores file systems from backups produced by xfsdump . The xfsrestore utility has two modes: The simple mode enables users to restore an entire file system from a level 0 dump. This is the default mode. The cumulative mode enables file system restoration from an incremental backup: that is, level 1 to level 9. A unique session ID or session label identifies each backup. Restoring a backup from a tape containing multiple backups requires its corresponding session ID or label. To extract, add, or delete specific files from a backup, enter the xfsrestore interactive mode. The interactive mode provides a set of commands to manipulate the backup files. 3.7.2. Backing Up an XFS File System This procedure describes how to back up the content of an XFS file system into a file or a tape. Procedure 3.1. Backing Up an XFS File System Use the following command to back up an XFS file system: Replace level with the dump level of your backup. Use 0 to perform a full backup or 1 to 9 to perform subsequent incremental backups. Replace backup-destination with the path where you want to store your backup. The destination can be a regular file, a tape drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive. Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to back up. For example, /mnt/data/ . The file system must be mounted. When backing up multiple file systems and saving them on a single tape device, add a session label to each backup using the -L label option so that it is easier to identify them when restoring. Replace label with any name for your backup: for example, backup_data . Example 3.4.
Backing up Multiple XFS File Systems To back up the content of XFS file systems mounted on the /boot/ and /data/ directories and save them as files in the /backup-files/ directory: To back up multiple file systems on a single tape device, add a session label to each backup using the -L label option: Additional Resources For more information about backing up XFS file systems, see the xfsdump (8) man page. 3.7.3. Restoring an XFS File System from Backup This procedure describes how to restore the content of an XFS file system from a file or tape backup. Prerequisites You need a file or tape backup of XFS file systems, as described in Section 3.7.2, "Backing Up an XFS File System" . Procedure 3.2. Restoring an XFS File System from Backup The command to restore the backup varies depending on whether you are restoring from a full backup or an incremental one, or are restoring multiple backups from a single tape device: Replace backup-location with the location of the backup. This can be a regular file, a tape drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive. Replace restoration-path with the path to the directory where you want to restore the file system. For example, /mnt/data/ . To restore a file system from an incremental (level 1 to level 9) backup, add the -r option. To restore a backup from a tape device that contains multiple backups, specify the backup using the -S or -L options. The -S lets you choose a backup by its session ID, while the -L lets you choose by the session label. To obtain the session ID and session labels, use the xfsrestore -I command. Replace session-id with the session ID of the backup. For example, b74a3586-e52e-4a4a-8775-c3334fa8ea2c . Replace session-label with the session label of the backup. For example, my_backup_session_label . To use xfsrestore interactively, use the -i option. The interactive dialog begins after xfsrestore finishes reading the specified device. Available commands in the interactive xfsrestore shell include cd , ls , add , delete , and extract ; for a complete list of commands, use the help command. Example 3.5. Restoring Multiple XFS File Systems To restore the XFS backup files and save their content into directories under /mnt/ : To restore from a tape device containing multiple backups, specify each backup by its session label or session ID: Informational Messages When Restoring a Backup from a Tape When restoring a backup from a tape with backups from multiple file systems, the xfsrestore utility might issue messages. The messages inform you whether a match of the requested backup has been found when xfsrestore examines each backup on the tape in sequential order. For example: The informational messages keep appearing until the matching backup is found. Additional Resources For more information about restoring XFS file systems, see the xfsrestore (8) man page. | [
"xfsdump -l level [ -L label ] -f backup-destination path-to-xfs-filesystem",
"xfsdump -l 0 -f /backup-files/boot.xfsdump /boot # xfsdump -l 0 -f /backup-files/data.xfsdump /data",
"xfsdump -l 0 -L \"backup_boot\" -f /dev/ st0 /boot # xfsdump -l 0 -L \"backup_data\" -f /dev/ st0 /data",
"xfsrestore [ -r ] [ -S session-id ] [ -L session-label ] [ -i ] -f backup-location restoration-path",
"xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/ # xfsrestore -f /backup-files/data.xfsdump /mnt/data/",
"xfsrestore -f /dev/st0 -L \"backup_boot\" /mnt/boot/ # xfsrestore -f /dev/st0 -S \"45e9af35-efd2-4244-87bc-4762e476cbab\" /mnt/data/",
"xfsrestore: preparing drive xfsrestore: examining media file 0 xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a) does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-c50467912408) xfsrestore: examining media file 1 xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a) does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-c50467912408) [...]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/xfsbackuprestore |
Chapter 4. Identity Brokering APIs | Chapter 4. Identity Brokering APIs Red Hat build of Keycloak can delegate authentication to a parent IDP for login. A typical example of this is the case where you want users to be able to log in through a social provider such as Facebook or Google. You can also link existing accounts to a brokered IDP. This section describes some APIs that your applications can use as it pertains to identity brokering. 4.1. Retrieving external IDP tokens Red Hat build of Keycloak allows you to store tokens and responses from the authentication process with the external IDP. For that, you can use the Store Token configuration option on the IDP's settings page. Application code can retrieve these tokens and responses to pull in extra user information, or to securely invoke requests on the external IDP. For example, an application might want to use the Google token to invoke on other Google services and REST APIs. To retrieve a token for a particular identity provider you need to send a request as follows: An application must have authenticated with Red Hat build of Keycloak and have received an access token. This access token will need to have the broker client-level role read-token set. This means that the user must have a role mapping for this role and the client application must have that role within its scope. In this case, given that you are accessing a protected service in Red Hat build of Keycloak, you need to send the access token issued by Red Hat build of Keycloak during the user authentication. In the broker configuration page you can automatically assign this role to newly imported users by turning on the Stored Tokens Readable switch. These external tokens can be re-established by either logging in again through the provider, or using the client initiated account linking API. 4.2. Client initiated account linking Some applications want to integrate with social providers like Facebook, but do not want to provide an option to login via these social providers. Red Hat build of Keycloak offers a browser-based API that applications can use to link an existing user account to a specific external IDP. This is called client-initiated account linking. Account linking can only be initiated by OIDC applications. The way it works is that the application forwards the user's browser to a URL on the Red Hat build of Keycloak server requesting that it wants to link the user's account to a specific external provider (i.e. Facebook). The server initiates a login with the external provider. The browser logs in at the external provider and is redirected back to the server. The server establishes the link and redirects back to the application with a confirmation. There are some preconditions that must be met by the client application before it can initiate this protocol: The desired identity provider must be configured and enabled for the user's realm in the admin console. The user account must already be logged in as an existing user via the OIDC protocol The user must have an account.manage-account or account.manage-account-links role mapping. The application must be granted the scope for those roles within its access token The application must have access to its access token as it needs information within it to generate the redirect URL. To initiate the login, the application must fabricate a URL and redirect the user's browser to this URL. 
The URL looks like this: Here's a description of each path and query param: provider This is the provider alias of the external IDP that you defined in the Identity Provider section of the admin console. client_id This is the OIDC client id of your application. When you registered the application as a client in the admin console, you had to specify this client id. redirect_uri This is the application callback URL you want to redirect to after the account link is established. It must be a valid client redirect URI pattern. In other words, it must match one of the valid URL patterns you defined when you registered the client in the admin console. nonce This is a random string that your application must generate. hash This is a Base64 URL encoded hash. This hash is generated by Base64 URL encoding a SHA_256 hash of nonce + token.getSessionState() + token.getIssuedFor() + provider . The token values are obtained from the OIDC access token. Basically you are hashing the random nonce, the user session id, the client id, and the identity provider alias you want to access. Here's an example of Java Servlet code that generates the URL to establish the account link. KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance("SHA-256"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute("hash", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path("/realms/{realm}/broker/{provider}/link") .queryParam("nonce", nonce) .queryParam("hash", hash) .queryParam("client_id", clientId) .queryParam("redirect_uri", redirectUri).build(realm, provider).toString(); Why is this hash included? We do this so that the auth server is guaranteed to know that the client application initiated the request and no other rogue app just randomly asked for a user account to be linked to a specific provider. The auth server will first check to see if the user is logged in by checking the SSO cookie set at login. It will then try to regenerate the hash based on the current login and match it up to the hash sent by the application. After the account has been linked, the auth server will redirect back to the redirect_uri . If there is a problem servicing the link request, the auth server may or may not redirect back to the redirect_uri . The browser may just end up at an error page instead of being redirected back to the application. If there is an error condition and the auth server deems it safe enough to redirect back to the client app, an additional error query parameter will be appended to the redirect_uri . Warning While this API guarantees that the application initiated the request, it does not completely prevent CSRF attacks for this operation. The application is still responsible for guarding against CSRF attacks targeted at itself. 4.2.1. Refreshing external tokens If you are using the external token generated by logging into the provider (i.e. a Facebook or GitHub token), you can refresh this token by re-initiating the account linking API. | [
"GET /realms/{realm}/broker/{provider_alias}/token HTTP/1.1 Host: localhost:8080 Authorization: Bearer <KEYCLOAK ACCESS TOKEN>",
"/{auth-server-root}/realms/{realm}/broker/{provider}/link?client_id={id}&redirect_uri={uri}&nonce={nonce}&hash={hash}",
"KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance(\"SHA-256\"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute(\"hash\", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path(\"/realms/{realm}/broker/{provider}/link\") .queryParam(\"nonce\", nonce) .queryParam(\"hash\", hash) .queryParam(\"client_id\", clientId) .queryParam(\"redirect_uri\", redirectUri).build(realm, provider).toString();"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_developer_guide/identity_brokering_apis |
Chapter 19. Troubleshooting installation issues | Chapter 19. Troubleshooting installation issues To assist in troubleshooting a failed OpenShift Container Platform installation, you can gather logs from the bootstrap and control plane, or master, machines. You can also get debug information from the installation program. 19.1. Prerequisites You attempted to install an OpenShift Container Platform cluster, and installation failed. 19.2. Gathering logs from a failed installation If you gave an SSH key to your installation program, you can gather data about your failed installation. Note You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command. Prerequisites Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program. If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes). Procedure Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines: If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command: $ ./openshift-install gather bootstrap --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses. If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command: $ ./openshift-install gather bootstrap --dir <installation_directory> \ 1 --bootstrap <bootstrap_address> \ 2 --master <master_1_address> \ 3 --master <master_2_address> \ 4 --master <master_3_address> 5 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. 2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine. 3 4 5 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address. Note A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses. Example output INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz" If you open a Red Hat support case about your installation failure, include the compressed logs in the case. 19.3. Manually gathering logs with SSH access to your host(s) Manually gather logs in situations where must-gather or automated collection methods do not work.
Important By default, SSH access to the OpenShift Container Platform nodes is disabled on the Red Hat OpenStack Platform (RHOSP) based installations. Prerequisites You must have SSH access to your host(s). Procedure Collect the bootkube.service service logs from the bootstrap host using the journalctl command by running: $ journalctl -b -f -u bootkube.service Collect the bootstrap host's container logs using the podman logs command. This is shown as a loop to get all of the container logs from the host: $ for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done Alternatively, collect the host's container logs using the tail command by running: # tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log Collect the kubelet.service and crio.service service logs from the master and worker hosts using the journalctl command by running: $ journalctl -b -f -u kubelet.service -u crio.service Collect the master and worker host container logs using the tail command by running: $ sudo tail -f /var/log/containers/* 19.4. Manually gathering logs without SSH access to your host(s) Manually gather logs in situations where must-gather or automated collection methods do not work. If you do not have SSH access to your node, you can access the system journal to investigate what is happening on your host. Prerequisites Your OpenShift Container Platform installation must be complete. Your API service is still functional. You have system administrator privileges. Procedure Access journald unit logs under /var/log by running: $ oc adm node-logs --role=master -u kubelet Access host file paths under /var/log by running: $ oc adm node-logs --role=master --path=openshift-apiserver 19.5. Getting debug information from the installation program You can use any of the following actions to get debug information from the installation program. Look at debug messages from a past installation in the hidden .openshift_install.log file. For example, enter: $ cat ~/<installation_directory>/.openshift_install.log 1 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . Change to the directory that contains the installation program and re-run it with --log-level=debug : $ ./openshift-install create cluster --dir <installation_directory> --log-level debug 1 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . 19.6. Reinstalling the OpenShift Container Platform cluster If you are unable to debug and resolve issues in the failed OpenShift Container Platform installation, consider installing a new OpenShift Container Platform cluster. Before starting the installation process again, you must complete thorough cleanup. For a user-provisioned infrastructure (UPI) installation, you must manually destroy the cluster and delete all associated resources. The following procedure is for an installer-provisioned infrastructure (IPI) installation. Procedure Destroy the cluster and remove all the resources associated with the cluster, including the hidden installer state files in the installation directory: $ ./openshift-install destroy cluster --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates.
Before reinstalling the cluster, delete the installation directory: $ rm -rf <installation_directory> Follow the procedure for installing a new OpenShift Container Platform cluster. Additional resources Installing an OpenShift Container Platform cluster | [
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address>\" 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"journalctl -b -f -u bootkube.service",
"for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done",
"tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log",
"journalctl -b -f -u kubelet.service -u crio.service",
"sudo tail -f /var/log/containers/*",
"oc adm node-logs --role=master -u kubelet",
"oc adm node-logs --role=master --path=openshift-apiserver",
"cat ~/<installation_directory>/.openshift_install.log 1",
"./openshift-install create cluster --dir <installation_directory> --log-level debug 1",
"./openshift-install destroy cluster --dir <installation_directory> 1",
"rm -rf <installation_directory>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-troubleshooting |
Chapter 8. Initiating overcloud deployment | Chapter 8. Initiating overcloud deployment Deploy the overcloud after completing the initial configuration and customization of services. 8.1. Initiating overcloud deployment Deploy the overcloud to implement the configuration of the Red Hat OpenStack Platform (RHOSP) environment. Prerequisites During undercloud installation, set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must inject a trust anchor when you deploy the overcloud. Note If you want to add Ceph Dashboard during your overcloud deployment, see Chapter 10, Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment . Procedure Deploy the overcloud using the openstack overcloud deploy command. For a complete list of all command arguments, see openstack overcloud deploy in the Command line interface reference . The following is an example usage of the command: The example command uses the following options: --templates Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/ . -r /home/stack/templates/roles_data_custom.yaml Specifies a customized roles definition file. -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml Sets the director to finalize the previously deployed Ceph Storage cluster. This environment file deploys RGW by default. It also creates pools, keys, and daemons. If you do not want to deploy RGW or object storage, see the options described in Section 5.5, "Deployment options for Red Hat OpenStack Platform object storage" -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml Enables the Ceph Metadata Server, as described in Section 5.3, "Enabling Ceph Metadata Server" . -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml Enables the Block Storage Backup service ( cinder-backup ), as described in Section 5.6, "Configuring the Block Storage Backup Service to use Ceph" . -e /home/stack/templates/storage-config.yaml Adds the environment file that contains your custom Ceph Storage configuration as described in Section 5.1, "Configuring a custom environment file" -e /home/stack/templates/deployed-ceph.yaml Adds the environment file that contains your Ceph cluster settings, as output by the openstack overcloud ceph deploy command run earlier. -e /home/stack/templates/networks-deployed.yaml Adds the environment file that contains your Ceph cluster network settings, as output by openstack overcloud network provision . -e /home/stack/templates/deployed-metal.yaml Adds the environment file that contains your Ceph cluster node settings, as output by openstack overcloud node provision . -e /home/stack/templates/deployed-vips.yaml Adds the environment file that contains your Ceph cluster network VIP settings, as output by openstack overcloud network vip provision . --ntp-server pool.ntp.org Sets the NTP server. | [
"openstack overcloud deploy --templates -r /home/stack/templates/roles_data_custom.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml -e /home/stack/templates/storage-config.yaml -e /home/stack/templates/deployed-ceph.yaml -e /home/stack/templates/networks-deployed.yaml -e /home/stack/templates/deployed-metal.yaml -e /home/stack/templates/deployed-vips.yaml --ntp-server pool.ntp.org"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_initiating-overcloud-deployment_deployingcontainerizedrhcs |
Getting Started with AMQ Broker | Getting Started with AMQ Broker Red Hat AMQ 2021.Q3 For Use with AMQ Broker 7.9 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/getting_started_with_amq_broker/index |
Preface | Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7.7 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.7_release_notes/preface |