title | content | commands | url
---|---|---|---|
Chapter 11. Using service accounts in applications | Chapter 11. Using service accounts in applications 11.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 11.2. Default service accounts Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project. 11.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service Account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 11.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service Account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any imagestream in the project using the internal container image registry. 11.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. 
Procedure Optional: To view the service accounts in the current project: $ oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: $ oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: $ oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44 11.4. Using a service account's credentials externally You can distribute a service account's token to external applications that must authenticate to the API. To pull an image, the authenticated user must have get rights on the requested imagestreams/layers . To push an image, the authenticated user must have update rights on the requested imagestreams/layers . By default, all service accounts in a project have rights to pull any image in the same project, and the builder service account has rights to push any image in the same project. Procedure View the service account's API token: $ oc describe secret <secret_name> For example: $ oc describe secret robot-token-uzkbh -n top-secret Example output Name: robot-token-uzkbh Labels: <none> Annotations: kubernetes.io/service-account.name=robot,kubernetes.io/service-account.uid=49f19e2e-16c6-11e5-afdc-3c970e4b7ffe Type: kubernetes.io/service-account-token Data token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9... Log in using the token that you obtained: $ oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9... Example output Logged into "https://server:8443" as "system:serviceaccount:top-secret:robot" using the token provided. You don't have any projects. You can try to create a new project, by running $ oc new-project <projectname> Confirm that you logged in as the service account: $ oc whoami Example output system:serviceaccount:top-secret:robot | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44",
"oc describe secret <secret_name>",
"oc describe secret robot-token-uzkbh -n top-secret",
"Name: robot-token-uzkbh Labels: <none> Annotations: kubernetes.io/service-account.name=robot,kubernetes.io/service-account.uid=49f19e2e-16c6-11e5-afdc-3c970e4b7ffe Type: kubernetes.io/service-account-token Data token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9",
"oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9",
"Logged into \"https://server:8443\" as \"system:serviceaccount:top-secret:robot\" using the token provided. You don't have any projects. You can try to create a new project, by running USD oc new-project <projectname>",
"oc whoami",
"system:serviceaccount:top-secret:robot"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/using-service-accounts |
Chapter 51. Enabling authentication using AD User Principal Names in IdM | Chapter 51. Enabling authentication using AD User Principal Names in IdM 51.1. User principal names in an AD forest trusted by IdM As an Identity Management (IdM) administrator, you can allow AD users to use alternative User Principal Names (UPNs) to access resources in the IdM domain. A UPN is an alternative user login that AD users authenticate with in the format of user_name@KERBEROS-REALM . As an AD administrator, you can set alternative values for both user_name and KERBEROS-REALM , since you can configure both additional Kerberos aliases and UPN suffixes in an AD forest. For example, if a company uses the Kerberos realm AD.EXAMPLE.COM , the default UPN for a user is [email protected] . To allow your users to log in using their email addresses, for example [email protected] , you can configure EXAMPLE.COM as an alternative UPN in AD. Alternative UPNs (also known as enterprise UPNs ) are especially convenient if your company has recently experienced a merger and you want to provide your users with a unified logon namespace. UPN suffixes are only visible for IdM when defined in the AD forest root. As an AD administrator, you can define UPNs with the Active Directory Domain and Trust utility or the PowerShell command line tool. Note To configure UPN suffixes for users, Red Hat recommends using tools that perform error validation, such as the Active Directory Domain and Trust utility. Red Hat recommends against configuring UPNs through low-level modifications, such as using ldapmodify commands to set the userPrincipalName attribute for users, because Active Directory does not validate those operations. After you define a new UPN on the AD side, run the ipa trust-fetch-domains command on an IdM server to retrieve the updated UPNs. See Ensuring that AD UPNs are up-to-date in IdM . IdM stores the UPN suffixes for a domain in the multi-value attribute ipaNTAdditionalSuffixes of the subtree cn=trusted_domain_name,cn=ad,cn=trusts,dc=idm,dc=example,dc=com . Additional resources How to script UPN suffix setup in AD forest root How to manually modify AD user entries and bypass any UPN suffix validation Trust controllers and trust agents 51.2. Ensuring that AD UPNs are up-to-date in IdM After you add or remove a User Principal Name (UPN) suffix in a trusted Active Directory (AD) forest, refresh the information for the trusted forest on an IdM server. Prerequisites IdM administrator credentials. Procedure Enter the ipa trust-fetch-domains command. Note that a seemingly empty output is expected: Verification Enter the ipa trust-show command to verify that the server has fetched the new UPN. Specify the name of the AD realm when prompted: The output shows that the example.com UPN suffix is now part of the ad.example.com realm entry. 51.3. Gathering troubleshooting data for AD UPN authentication issues Follow this procedure to gather troubleshooting data about the User Principal Name (UPN) configuration from your Active Directory (AD) environment and your IdM environment. If your AD users are unable to log in using alternate UPNs, you can use this information to narrow your troubleshooting efforts. Prerequisites You must be logged in to an IdM Trust Controller or Trust Agent to retrieve information from an AD domain controller. You need root permissions to modify the following configuration files, and to restart IdM services. Procedure Open the /usr/share/ipa/smb.conf.empty configuration file in a text editor.
Add the following contents to the file. Save and close the /usr/share/ipa/smb.conf.empty file. Open the /etc/ipa/server.conf configuration file in a text editor. If you do not have that file, create one. Add the following contents to the file. Save and close the /etc/ipa/server.conf file. Restart the Apache webserver service to apply the configuration changes: Retrieve trust information from your AD domain: Review the debugging output and troubleshooting information in the following log files: /var/log/httpd/error_log /var/log/samba/log.* Additional resources Using rpcclient to gather troubleshooting data for AD UPN authentication issues (Red Hat Knowledgebase) | [
"ipa trust-fetch-domains Realm-Name: ad.example.com ------------------------------- No new trust domains were found ------------------------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa trust-show Realm-Name: ad.example.com Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Trust direction: One-way trust Trust type: Active Directory domain UPN suffixes: example.com",
"[global] log level = 10",
"[global] debug = True",
"systemctl restart httpd",
"ipa trust-fetch-domains <ad.example.com>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/enabling-authentication-using-ad-user-principal-names-in-idm_managing-users-groups-hosts |
Appendix A. Using an NFS share for content storage | Appendix A. Using an NFS share for content storage Your environment requires adequate hard disk space to fulfill content storage. In some situations, it is useful to use an NFS share to store this content. This appendix shows how to mount the NFS share on your Satellite Server's content management component. Important Use high-bandwidth, low-latency storage for the /var/lib/pulp file system. Red Hat Satellite has many I/O-intensive operations; therefore, high-latency, low-bandwidth storage might have issues with performance degradation. Procedure Create the NFS share. This example uses a share at nfs.example.com:/Satellite/pulp . Ensure this share provides the appropriate permissions to Satellite Server and its apache user. Stop Satellite services on your Satellite Server: Ensure Satellite Server has the nfs-utils package installed: You need to copy the existing contents of /var/lib/pulp to the NFS share. First, mount the NFS share to a temporary location: Copy the existing contents of /var/lib/pulp to the temporary location: Set the permissions for all files on the share to use the pulp user. Unmount the temporary storage location: Remove the existing contents of /var/lib/pulp : Edit the /etc/fstab file and add the following line: This makes the mount persistent across system reboots. Ensure that you include the SELinux context. Enable the mount: Confirm the NFS share mounts to /var/lib/pulp : Also confirm that the existing content exists at the mount on /var/lib/pulp : Start Satellite services on your Satellite Server: Satellite Server now uses the NFS share to store content. Run a content synchronization to ensure the NFS share works as expected. For more information, see Section 4.7, "Synchronizing repositories" . | [
"satellite-maintain service stop",
"satellite-maintain packages install nfs-utils",
"mkdir /mnt/temp mount -o rw nfs.example.com:/Satellite/pulp /mnt/temp",
"cp -r /var/lib/pulp/* /mnt/temp/.",
"umount /mnt/temp",
"rm -rf /var/lib/pulp/*",
"nfs.example.com:/Satellite/pulp /var/lib/pulp nfs rw,hard,intr,context=\"system_u:object_r:pulpcore_var_lib_t:s0\"",
"mount -a",
"df Filesystem 1K-blocks Used Available Use% Mounted on nfs.example.com:/Satellite/pulp 309506048 58632800 235128224 20% /var/lib/pulp",
"ls /var/lib/pulp",
"satellite-maintain service start"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/using_an_nfs_share_for_content_storage_content-management |
E.2.13. /proc/kcore | E.2.13. /proc/kcore This file represents the physical memory of the system and is stored in the core file format. Unlike most /proc/ files, kcore displays a size. This value is given in bytes and is equal to the size of the physical memory (RAM) used plus 4 KB. The contents of this file are designed to be examined by a debugger, such as gdb , and are not human readable. Warning Do not view the /proc/kcore virtual file. The contents of the file scramble text output on the terminal. If this file is accidentally viewed, press Ctrl + C to stop the process and then type reset to bring back the command line prompt. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-kcore |
5.5.4. Do Not Remove the IncludesNoExec Directive | 5.5.4. Do Not Remove the IncludesNoExec Directive By default, the server-side includes module cannot execute commands. It is ill-advised to change this setting unless absolutely necessary, as it could potentially enable an attacker to execute commands on the system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-server-http-ssi |
Chapter 2. access | Chapter 2. access This chapter describes the commands under the access command. 2.1. access rule delete Delete access rule(s) Usage: Table 2.1. Positional arguments Value Summary <access-rule> Access rule(s) to delete (name or id) Table 2.2. Command arguments Value Summary -h, --help Show this help message and exit 2.2. access rule list List access rules Usage: Table 2.3. Command arguments Value Summary -h, --help Show this help message and exit --user <user> User whose access rules to list (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 2.4. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 2.5. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 2.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 2.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 2.3. access rule show Display access rule details Usage: Table 2.8. Positional arguments Value Summary <access-rule> Access rule to display (name or id) Table 2.9. Command arguments Value Summary -h, --help Show this help message and exit Table 2.10. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 2.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 2.12. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 2.4. access token create Create an access token Usage: Table 2.14. Command arguments Value Summary -h, --help Show this help message and exit --consumer-key <consumer-key> Consumer key (required) --consumer-secret <consumer-secret> Consumer secret (required) --request-key <request-key> Request token to exchange for access token (required) --request-secret <request-secret> Secret associated with <request-key> (required) --verifier <verifier> Verifier associated with <request-key> (required) Table 2.15. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 2.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 2.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack access rule delete [-h] <access-rule> [<access-rule> ...]",
"openstack access rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]",
"openstack access rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <access-rule>",
"openstack access token create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --consumer-key <consumer-key> --consumer-secret <consumer-secret> --request-key <request-key> --request-secret <request-secret> --verifier <verifier>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/access |
Appendix A. Component Versions | Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7.7 release. Table A.1. Component Versions Component Version kernel 3.10.0-1062 kernel-alt 4.14.0-115 QLogic qla2xxx driver 10.00.00.12.07.7-k QLogic qla4xxx driver 5.04.00.00.07.02-k0 Emulex lpfc driver 0:12.0.0.10 iSCSI initiator utils ( iscsi-initiator-utils ) 6.2.0.874-11 DM-Multipath ( device-mapper-multipath ) 0.4.9-127 LVM ( lvm2 ) 2.02.185-2 qemu-kvm [a] 1.5.3-167 qemu-kvm-ma [b] 2.12.0-18 [a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems. [b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.7_release_notes/component_versions |
8.119. librtas | 8.119.1. RHBA-2014:1427 - librtas bug fix and enhancement update Updated librtas packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The librtas packages contain a set of libraries that allow access to the Run-Time Abstraction Services (RTAS) on 64-bit PowerPC architectures. The librtasevent library contains definitions and routines for analyzing RTAS events. Note The librtas packages have been upgraded to upstream version 1.3.10, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds RTAS support for PCI hot plug on POWERPC 8 systems and improves support for RTAS Event parsing on little-endian POWERPC systems. In addition, the librtasevent library is now able to analyze hot plug events. (BZ# 1073037 ) Users of librtas are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/pckg-librtas |
Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] | Chapter 11. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the dnsRecord. status object status is the most recently observed status of the dnsRecord. 11.1.1. .spec Description spec is the specification of the desired behavior of the dnsRecord. Type object Required dnsManagementPolicy dnsName recordTTL recordType targets Property Type Description dnsManagementPolicy string dnsManagementPolicy denotes the current policy applied on the DNS record. Records that have policy set as "Unmanaged" are ignored by the ingress operator. This means that the DNS record on the cloud provider is not managed by the operator, and the "Published" status condition will be updated to "Unknown" status, since it is externally managed. Any existing record on the cloud provider can be deleted at the discretion of the cluster admin. This field defaults to Managed. Valid values are "Managed" and "Unmanaged". dnsName string dnsName is the hostname of the DNS record recordTTL integer recordTTL is the record TTL in seconds. If zero, the default is 30. RecordTTL will not be used in AWS regions Alias targets, but will be used in CNAME targets, per AWS API contract. recordType string recordType is the DNS record type. For example, "A" or "CNAME". targets array (string) targets are record targets. 11.1.2. .status Description status is the most recently observed status of the dnsRecord. Type object Property Type Description observedGeneration integer observedGeneration is the most recently observed generation of the DNSRecord. When the DNSRecord is updated, the controller updates the corresponding record in each managed zone. If an update for a particular zone fails, that failure is recorded in the status condition for the zone so that the controller can determine that it needs to retry the update for that specific zone. zones array zones are the status of the record in each zone. zones[] object DNSZoneStatus is the status of a record within a specific zone. 11.1.3. .status.zones Description zones are the status of the record in each zone. Type array 11.1.4. 
.status.zones[] Description DNSZoneStatus is the status of a record within a specific zone. Type object Property Type Description conditions array conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. conditions[] object DNSZoneCondition is just the standard condition fields. dnsZone object dnsZone is the zone where the record is published. 11.1.5. .status.zones[].conditions Description conditions are any conditions associated with the record in the zone. If publishing the record succeeds, the "Published" condition will be set with status "True" and upon failure it will be set to "False" along with the reason and message describing the cause of the failure. Type array 11.1.6. .status.zones[].conditions[] Description DNSZoneCondition is just the standard condition fields. Type object Required status type Property Type Description lastTransitionTime string message string reason string status string type string 11.1.7. .status.zones[].dnsZone Description dnsZone is the zone where the record is published. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 11.2. API endpoints The following API endpoints are available: /apis/ingress.operator.openshift.io/v1/dnsrecords GET : list objects of kind DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords DELETE : delete collection of DNSRecord GET : list objects of kind DNSRecord POST : create a DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} DELETE : delete a DNSRecord GET : read the specified DNSRecord PATCH : partially update the specified DNSRecord PUT : replace the specified DNSRecord /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status GET : read status of the specified DNSRecord PATCH : partially update status of the specified DNSRecord PUT : replace status of the specified DNSRecord 11.2.1. /apis/ingress.operator.openshift.io/v1/dnsrecords HTTP method GET Description list objects of kind DNSRecord Table 11.1. HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty 11.2.2. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords HTTP method DELETE Description delete collection of DNSRecord Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNSRecord Table 11.3. 
HTTP responses HTTP code Reponse body 200 - OK DNSRecordList schema 401 - Unauthorized Empty HTTP method POST Description create a DNSRecord Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body DNSRecord schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 202 - Accepted DNSRecord schema 401 - Unauthorized Empty 11.2.3. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name} Table 11.7. Global path parameters Parameter Type Description name string name of the DNSRecord HTTP method DELETE Description delete a DNSRecord Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNSRecord Table 11.10. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNSRecord Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNSRecord Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body DNSRecord schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty 11.2.4. /apis/ingress.operator.openshift.io/v1/namespaces/{namespace}/dnsrecords/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the DNSRecord HTTP method GET Description read status of the specified DNSRecord Table 11.17. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNSRecord Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNSRecord Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. Body parameters Parameter Type Description body DNSRecord schema Table 11.22. HTTP responses HTTP code Reponse body 200 - OK DNSRecord schema 201 - Created DNSRecord schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/dnsrecord-ingress-operator-openshift-io-v1 |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/getting_started_with_red_hat_build_of_openjdk_8/making-open-source-more-inclusive |
Chapter 8. OpenStack Cloud Controller Manager reference guide | Chapter 8. OpenStack Cloud Controller Manager reference guide 8.1. The OpenStack Cloud Controller Manager Beginning with OpenShift Container Platform 4.12, clusters that run on Red Hat OpenStack Platform (RHOSP) were switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the Cloud Controller Manager . To preserve user-defined configurations for the legacy cloud provider, existing configurations are mapped to new ones as part of the migration process. It searches for a configuration called cloud-provider-config in the openshift-config namespace. Note The config map name cloud-provider-config is not statically configured. It is derived from the spec.cloudConfig.name value in the infrastructure/cluster CRD. Found configurations are synchronized to the cloud-conf config map in the openshift-cloud-controller-manager namespace. As part of this synchronization, the OpenStack CCM Operator alters the new config map such that its properties are compatible with the external cloud provider. The file is changed in the following ways: The [Global] secret-name , [Global] secret-namespace , and [Global] kubeconfig-path options are removed. They do not apply to the external cloud provider. The [Global] use-clouds , [Global] clouds-file , and [Global] cloud options are added. The entire [BlockStorage] section is removed. External cloud providers no longer perform storage operations. Block storage configuration is managed by the Cinder CSI driver. Additionally, the CCM Operator enforces a number of default options. Values for these options are always overriden as follows: [Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack ... [LoadBalancer] enabled = true The clouds-value value, /etc/openstack/secret/clouds.yaml , is mapped to the openstack-cloud-credentials config in the openshift-cloud-controller-manager namespace. You can modify the RHOSP cloud in this file as you do any other clouds.yaml file. 8.2. The OpenStack Cloud Controller Manager (CCM) config map An OpenStack CCM config map defines how your cluster interacts with your RHOSP cloud. By default, this configuration is stored under the cloud.conf key in the cloud-conf config map in the openshift-cloud-controller-manager namespace. Important The cloud-conf config map is generated from the cloud-provider-config config map in the openshift-config namespace. To change the settings that are described by the cloud-conf config map, modify the cloud-provider-config config map. As part of this synchronization, the CCM Operator overrides some options. For more information, see "The RHOSP Cloud Controller Manager". For example: An example cloud-conf config map apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: "2022-12-20T17:01:08Z" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: "2519" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677 1 Set global options by using a clouds.yaml file rather than modifying the config map. The following options are present in the config map. Except when indicated otherwise, they are mandatory for clusters that run on RHOSP. 8.2.1. 
Load balancer options CCM supports several load balancer options for deployments that use Octavia. Note Neutron-LBaaS support is deprecated. Option Description enabled Whether or not to enable the LoadBalancer type of services integration. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. floating-subnet-id Optional. The external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-id . floating-subnet Optional. A name pattern (glob or regular expression if starting with ~ ) for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet . If multiple subnets match the pattern, the first one with available IP addresses is used. floating-subnet-tags Optional. Tags for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-tags . If multiple subnets match these tags, the first one with available IP addresses is used. If the RHOSP network is configured with sharing disabled, for example, with the --no-share flag used during creation, this option is unsupported. Set the network to share to use this option. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. For dual stack deployments, leave this option unset. The OpenStack cloud provider automatically selects which subnet to use for a load balancer. network-id The ID of the Networking service network on which load balancer VIPs are created. Unnecessary if subnet-id is set. If this property is not set, the network is automatically selected based on the network that cluster nodes use. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 . monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . 
The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . internal-lb Whether or not to create an internal load balancer without floating IP addresses. The default value is false . LoadBalancerClass "ClassName" This is a config section that comprises a set of options: floating-network-id floating-subnet-id floating-subnet floating-subnet-tags network-id subnet-id The behavior of these options is the same as that of the identically named options in the load balancer section of the CCM config file. You can set the ClassName value by specifying the service annotation loadbalancer.openstack.org/class . max-shared-lb The maximum number of services that can share a load balancer. The default value is 2 . 8.2.2. Options that the Operator overrides The CCM Operator overrides the following options, which you might recognize from configuring RHOSP. Do not configure them yourself. They are included in this document for informational purposes only. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . os-endpoint-type The type of endpoint to use from the service catalog. username The Identity service user name. password The Identity service user password. domain-id The Identity service user domain ID. domain-name The Identity service user domain name. tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. tenant-name The Identity service project name. tenant-domain-id The Identity service project domain ID. tenant-domain-name The Identity service project domain name. user-domain-id The Identity service user domain ID. user-domain-name The Identity service user domain name. use-clouds Whether or not to fetch authorization credentials from a clouds.yaml file. Options set in this section are prioritized over values read from the clouds.yaml file. CCM searches for the file in the following places: The value of the clouds-file option. A file path stored in the environment variable OS_CLIENT_CONFIG_FILE . The directory pkg/openstack . The directory ~/.config/openstack . The directory /etc/openstack . clouds-file The file path of a clouds.yaml file. It is used if the use-clouds option is set to true . cloud The named cloud in the clouds.yaml file that you want to use. It is used if the use-clouds option is set to true . | [
"[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true",
"apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/installing-openstack-cloud-config-reference |
1.2. File Locations | 1.2. File Locations See the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ds_file_locations |
7.7. Networked Versus Local Printers | 7.7. Networked Versus Local Printers Depending on organizational needs, it may be unnecessary to assign one printer to each member of your organization. Such overlap in expenditure can eat into allotted budgets, leaving less capital for other necessities. While local printers attached via a parallel or USB cable to every workstation are an ideal solution for the user, it is usually not economically feasible. Printer manufacturers have addressed this need by developing departmental (or workgroup) printers. These machines are usually durable, fast, and have long-life consumables. Workgroup printers usually are attached to a print server, a standalone device (such as a reconfigured workstation) that handles print jobs and routes output to the proper printer when available. More recent departmental printers include built-in or add-on network interfaces that eliminate the need for a dedicated print server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-printers-types-netornot |
3.2. Standard Schema | 3.2. Standard Schema The directory schema maintains the integrity of the data stored in the directory by imposing constraints on the size, range, and format of data values. The schema reflects decisions about what types of entries the directory contains (like people, devices, and organizations) and the attributes available to each entry. The predefined schema included with Directory Server contains both the standard LDAP schema as well as additional application-specific schema to support the features of the server. While this schema meets most directory needs, new object classes and attributes can be added to the schema ( extending the schema) to accommodate the unique needs of the directory. See Section 3.4, "Customizing the Schema" for information on extending the schema. 3.2.1. Schema Format Directory Server bases its schema format on version 3 of the LDAP protocol. This protocol requires directory servers to publish their schema through LDAP itself, allowing directory client applications to retrieve the schema programmatically and adapt their behavior accordingly. The global set of schema for Directory Server can be found in the cn=schema entry. The Directory Server schema differs slightly from the LDAPv3 schema, because it uses its own proprietary object classes and attributes. In addition, it uses a private field in the schema entries, called X-ORIGIN , which describes where the schema entry was defined originally. For example, if a schema entry is defined in the standard LDAPv3 schema, the X-ORIGIN field refers to RFC 2252. If the entry is defined by Red Hat for the Directory Server's use, the X-ORIGIN field contains the value Netscape Directory Server . For example, the standard person object class appears in the schema as follows: This schema entry states the object identifier, or OID , for the class ( 2.5.6.6 ), the name of the object class ( person ), a description of the class ( Standard Person ), and then lists the required attributes ( objectclass , sn , and cn ) and the allowed attributes ( description , seeAlso , telephoneNumber , and userPassword ). For more information about the LDAPv3 schema format, see the LDAPv3 Attribute Syntax Definitions document, RFC 2252, and other standard schema definitions in RFC 247, RFC 2927, and RFC 2307. All of these schema elements are supported in Red Hat Directory Server. 3.2.2. Standard Attributes Attributes contain specific data elements such as a name or a fax number. Directory Server represents data as attribute-data pairs , a descriptive schema attribute associated with a specific piece of information. These are also called attribute-value assertions or AVAs. For example, the directory can store a piece of data such as a person's name in a pair with the standard attribute, in this case commonName ( cn ). So, an entry for a person named Babs Jensen has the attribute-data pair cn: Babs Jensen . In fact, the entire entry is represented as a series of attribute-data pairs. The entire entry for Babs Jensen is as follows: The entry for Babs Jensen contains multiple values for some of the attributes. The givenName attribute appears twice, each time with a unique value. In the schema, each attribute definition contains the following information: A unique name. An object identifier (OID) for the attribute. A text description of the attribute. The OID of the attribute syntax. 
Indications of whether the attribute is single-valued or multi-valued, whether the attribute is for the directory's own use, the origin of the attribute, and any additional matching rules associated with the attribute. For example, the cn attribute definition appears in the schema as follows: The attribute's syntax defines the format of the values which the attribute allows. In a way, the syntax helps define the kind of information that can be stored in the attribute. The Directory Server supports all standard attribute syntaxes. Supported LDAP attribute syntaxes are covered in section Directory Server Attribute Syntaxes of the Red Hat Directory Server 10 Configuration, Command, and File Reference . 3.2.3. Standard Object Classes Object classes are used to group related information. Typically, an object class represents a real object, such as a person or a fax machine. Before it is possible to use an object class and its attributes in the directory, it must be identified in the schema. The directory recognizes a standard list of object classes by default; these are listed and described in the Red Hat Directory Server Configuration, Command, and File Reference . Each directory entry belongs to at least one object classes. Placing an object class identified in the schema on an entry tells the Directory Server that the entry can have a certain set of possible attribute values and must have another, usually smaller, set of required attribute values. Object class definitions contain the following information: A unique name. An object identifier (OID) that names the object. A set of mandatory attributes. A set of allowed (or optional) attributes. For example, the standard person object class appears in the schema as follows: As is the case for all of the Directory Server's schema, object classes are defined and stored directly in Directory Server. This means that the directory's schema can be both queried and changed with standard LDAP operations. | [
"objectclasses: ( 2.5.6.6 NAME 'person' DESC 'Standard Person Object Class' SUP top MUST (objectclass USD sn USD cn) MAY (description USD seeAlso USD telephoneNumber USD userPassword) X-ORIGIN 'RFC 2252' )",
"dn: uid=bjensen,ou=people,dc=example,dc=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Babs Jensen sn: Jensen givenName: Babs givenName: Barbara mail: [email protected]",
"attributetypes: ( 2.5.4.3 NAME 'cn' DESC 'commonName Standard Attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )",
"objectclasses: ( 2.5.6.6 NAME 'person' DESC 'Standard Person Object Class' SUP top MUST (objectclass USD sn USD cn) MAY (description USD seeAlso USD telephoneNumber USD userPassword) X-ORIGIN 'RFC 2252' )"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Directory_Schema-Standard_Schema |
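Because the schema is published in the cn=schema entry, any LDAP client can read it with an ordinary search. The following is a minimal sketch of such a query; the host name and bind credentials are placeholders, not values taken from this guide.
# Retrieve the published object class and attribute definitions from cn=schema.
# The -s base scope restricts the search to the schema entry itself.
ldapsearch -x -H ldap://ds.example.com:389 \
    -D "cn=Directory Manager" -W \
    -b "cn=schema" -s base objectClasses attributeTypes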
Chapter 3. Getting your cluster ID | Chapter 3. Getting your cluster ID When providing information to Red Hat Support, it is helpful to provide the unique identifier of your cluster. For MicroShift, you can get your cluster ID manually by using the OpenShift CLI ( oc ) or by retrieving the ID from a file. Note A cluster ID is created only after the MicroShift service runs for the first time after installation. 3.1. Getting the cluster ID of a running cluster Use either of the following steps to get the ID of a running cluster. Procedure Get the ID of a running cluster using oc get by entering the following command: USD oc get namespaces kube-system -o jsonpath={.metadata.uid} Example output 7cf13853-68f4-454e-8f5c-1af748cbfb1a Get the ID of a running cluster by retrieving it from the cluster-id file by entering the following command: USD sudo cat /var/lib/microshift/cluster-id Example output 7cf13853-68f4-454e-8f5c-1af748cbfb1a 3.2. Getting the cluster ID of a stopped cluster For a cluster that ran previously but is not running now, you can get the cluster ID from the cluster-id file in the /var/lib/microshift directory. Procedure Get the ID of a stopped cluster by retrieving it from the cluster-id file by entering the following command: USD sudo cat /var/lib/microshift/cluster-id Example output 7cf13853-68f4-454e-8f5c-1af748cbfb1a | [
"oc get namespaces kube-system -o jsonpath={.metadata.uid}",
"7cf13853-68f4-454e-8f5c-1af748cbfb1a",
"sudo cat /var/lib/microshift/cluster-id",
"7cf13853-68f4-454e-8f5c-1af748cbfb1a",
"sudo cat /var/lib/microshift/cluster-id",
"7cf13853-68f4-454e-8f5c-1af748cbfb1a"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/support/microshift-getting-cluster-id |
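For convenience, both retrieval paths can be combined in a small shell helper. This is a sketch only; it assumes oc is already configured for the cluster and that MicroShift uses the default /var/lib/microshift data directory.
#!/bin/bash
# Print the MicroShift cluster ID: query the live API first, then fall back to
# the on-disk file when the service is not running.
if id=$(oc get namespaces kube-system -o jsonpath='{.metadata.uid}' 2>/dev/null) && [ -n "$id" ]; then
    echo "$id"
else
    sudo cat /var/lib/microshift/cluster-id
fi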
Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster | Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.14 is installed and running on OpenShift Container Platform version 4.14 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py Python script using the following command: Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads → Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) → Edit Secret . Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name. On the Overview → Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads → Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage → Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage → Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy.
"get csv USD(oc get csv -n openshift-storage | grep ocs-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\\.features\\.ocs\\.openshift\\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster |
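As a rough illustration of the generate-and-save step above, the following sketch runs the exporter on a client node in the external Red Hat Ceph Storage cluster and checks that the file and object storage entries were produced before the JSON is uploaded. The pool name and RGW endpoint are placeholders, not values from this procedure.
# Generate the configuration details and save them for upload.
python3 ceph-external-cluster-details-exporter.py \
    --rbd-data-pool-name rbd-data-pool \
    --rgw-endpoint 10.0.0.10:8080 > external-cluster-config.json
# Confirm that the CephFS and Ceph RGW storage class entries are present.
jq -r '.[].name' external-cluster-config.json | grep -E 'cephfs|ceph-rgw'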
5.267. ql2500-firmware | 5.267. ql2500-firmware 5.267.1. RHBA-2012:0860 - ql2500-firmware bug fix update An updated ql2500-firmware package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The ql2500-firmware package provides the firmware required to run the QLogic 2500 Series of mass storage adapters. This update upgrades the ql2500 firmware to upstream version 5.06.05, which provides a number of bug fixes and enhancements over the previous version. (BZ# 766050 ) All users of QLogic 2500 Series Fibre Channel adapters are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/ql2500-firmware
10.4. The quorum unblock Command | 10.4. The quorum unblock Command In a situation in which you know that the cluster is inquorate but you want the cluster to proceed with resource management, you can use the following command to prevent the cluster from waiting for all nodes when establishing quorum. Note This command should be used with extreme caution. Before issuing this command, it is imperative that you ensure that nodes that are not currently in the cluster are switched off and have no access to shared resources. | [
"pcs cluster quorum unblock"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-quorumunblock-haar |
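As a cautionary illustration only, the sequence below reviews the quorum state before overriding it; it assumes the absent nodes have already been verified as powered off and cut off from shared resources.
# Review the quorum state reported by corosync before overriding it.
corosync-quorumtool -s
# Allow the cluster to proceed with resource management without the missing nodes.
pcs cluster quorum unblock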
Chapter 24. Runtime commands in Red Hat Decision Manager | Chapter 24. Runtime commands in Red Hat Decision Manager Red Hat Decision Manager supports runtime commands that you can send to KIE Server for asset-related operations, such as executing all rules or inserting or retracting objects in a KIE session. The full list of supported runtime commands is located in the org.drools.core.command.runtime package in your Red Hat Decision Manager instance. In the KIE Server REST API, you use the global org.drools.core.command.runtime commands or the rule-specific org.drools.core.command.runtime.rule commands as the request body for POST requests to http://SERVER:PORT/kie-server/services/rest/server/containers/instances/{containerId} . For more information about using the KIE Server REST API, see Chapter 21, KIE Server REST API for KIE containers and business assets . In the KIE Server Java client API, you can embed these commands in your Java application along with the relevant Java client. For example, for rule-related commands, you use the RuleServicesClient Java client with the embedded commands. For more information about using the KIE Server Java client API, see Chapter 22, KIE Server Java client API for KIE containers and business assets . 24.1. Sample runtime commands in Red Hat Decision Manager The following are sample runtime commands that you can use with the KIE Server REST API or Java client API for asset-related operations in KIE Server: BatchExecutionCommand InsertObjectCommand RetractCommand ModifyCommand GetObjectCommand GetObjectsCommand InsertElementsCommand FireAllRulesCommand QueryCommand SetGlobalCommand GetGlobalCommand For the full list of supported runtime commands, see the org.drools.core.command.runtime package in your Red Hat Decision Manager instance. Each command in this section includes a REST request body example (JSON) for the KIE Server REST API and an embedded Java command example for the KIE Server Java client API. The Java examples use an object org.drools.compiler.test.Person with the fields name (String) and age (Integer). BatchExecutionCommand Contains multiple commands to be executed together. Table 24.1. Command attributes Name Description Requirement commands List of commands to be executed. Required lookup Sets the KIE session ID on which the commands will be executed. For stateless KIE sessions, this attribute is required. For stateful KIE sessions, this attribute is optional and if not specified, the default KIE session is used. Required for stateless KIE session, optional for stateful KIE session Note KIE session IDs are in the kmodule.xml file of your Red Hat Decision Manager project. To view or add a KIE session ID in Business Central to use with the lookup command attribute, navigate to the relevant project in Business Central and go to project Settings KIE bases KIE sessions . If no KIE bases exist, click Add KIE base KIE sessions to define the new KIE base and KIE sessions. 
Example JSON request body { "lookup": "ksession1", "commands": [ { "insert": { "object": { "org.drools.compiler.test.Person": { "name": "john", "age": 25 } } } }, { "fire-all-rules": { "max": 10, "out-identifier": "firedActivations" } } ] } Example Java command InsertObjectCommand insertCommand = new InsertObjectCommand(new Person("john", 25)); FireAllRulesCommand fireCommand = new FireAllRulesCommand(); BatchExecutionCommand batch = new BatchExecutionCommandImpl(Arrays.asList(insertCommand, fireCommand), "ksession1"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": 0, "key": "firedActivations" } ], "facts": [] } } } ] } InsertObjectCommand Inserts an object into the KIE session. Table 24.2. Command attributes Name Description Requirement object The object to be inserted Required out-identifier ID of the FactHandle created from the object insertion and added to the execution results Optional return-object Boolean to determine whether the object must be returned in the execution results (default: true ) Optional entry-point Entry point for the insertion Optional Example JSON request body { "commands": [ { "insert": { "entry-point": "my stream", "object": { "org.drools.compiler.test.Person": { "age": 25, "name": "john" } }, "out-identifier": "john", "return-object": false } } ] } Example Java command Command insertObjectCommand = CommandFactory.newInsert(new Person("john", 25), "john", false, null); ksession.execute(insertObjectCommand); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [], "facts": [ { "value": { "org.drools.core.common.DefaultFactHandle": { "external-form": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } }, "key": "john" } ] } } } ] } RetractCommand Retracts an object from the KIE session. Table 24.3. Command attributes Name Description Requirement fact-handle The FactHandle associated with the object to be retracted Required Example JSON request body { "commands": [ { "retract": { "fact-handle": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } } ] } Example Java command: Use FactHandleFromString RetractCommand retractCommand = new RetractCommand(); retractCommand.setFactHandleFromString("123:234:345:456:567"); Example Java command: Use FactHandle from inserted object RetractCommand retractCommand = new RetractCommand(factHandle); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container employee-rostering successfully called.", "result": { "execution-results": { "results": [], "facts": [] } } } ] } ModifyCommand Modifies a previously inserted object in the KIE session. Table 24.4. 
Command attributes Name Description Requirement fact-handle The FactHandle associated with the object to be modified Required setters List of setters for object modifications Required Example JSON request body { "commands": [ { "modify": { "fact-handle": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap", "setters": { "accessor": "age", "value": 25 } } } ] } Example Java command ModifyCommand modifyCommand = new ModifyCommand(factHandle); List<Setter> setters = new ArrayList<Setter>(); setters.add(new SetterImpl("age", "25")); modifyCommand.setSetters(setters); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container employee-rostering successfully called.", "result": { "execution-results": { "results": [], "facts": [] } } } ] } GetObjectCommand Retrieves an object from a KIE session. Table 24.5. Command attributes Name Description Requirement fact-handle The FactHandle associated with the object to be retrieved Required out-identifier ID of the FactHandle created from the object insertion and added to the execution results Optional Example JSON request body { "commands": [ { "get-object": { "fact-handle": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap", "out-identifier": "john" } } ] } Example Java command GetObjectCommand getObjectCommand = new GetObjectCommand(); getObjectCommand.setFactHandleFromString("123:234:345:456:567"); getObjectCommand.setOutIdentifier("john"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": null, "key": "john" } ], "facts": [] } } } ] } GetObjectsCommand Retrieves all objects from the KIE session as a collection. Table 24.6. Command attributes Name Description Requirement object-filter Filter for the objects returned from the KIE session Optional out-identifier Identifier to be used in the execution results Optional Example JSON request body { "commands": [ { "get-objects": { "out-identifier": "objects" } } ] } Example Java command GetObjectsCommand getObjectsCommand = new GetObjectsCommand(); getObjectsCommand.setOutIdentifier("objects"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": [ { "org.apache.xerces.dom.ElementNSImpl": "<?xml version=\"1.0\" encoding=\"UTF-16\"?>\n<object xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"person\"><age>25</age><name>john</name>\n <\/object>" }, { "org.drools.compiler.test.Person": { "name": "john", "age": 25 } } ], "key": "objects" } ], "facts": [] } } } ] } InsertElementsCommand Inserts a list of objects into the KIE session. Table 24.7. Command attributes Name Description Requirement objects The list of objects to be inserted into the KIE session Required out-identifier ID of the FactHandle created from the object insertion and added to the execution results Optional return-object Boolean to determine whether the object must be returned in the execution results. Default value: true . 
Optional entry-point Entry point for the insertion Optional Example JSON request body { "commands": [ { "insert-elements": { "objects": [ { "containedObject": { "@class": "org.drools.compiler.test.Person", "age": 25, "name": "john" } }, { "containedObject": { "@class": "Person", "age": 35, "name": "sarah" } } ] } } ] } Example Java command List<Object> objects = new ArrayList<Object>(); objects.add(new Person("john", 25)); objects.add(new Person("sarah", 35)); Command insertElementsCommand = CommandFactory.newInsertElements(objects); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [], "facts": [ { "value": { "org.drools.core.common.DefaultFactHandle": { "external-form": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } }, "key": "john" }, { "value": { "org.drools.core.common.DefaultFactHandle": { "external-form": "0:4:436792766:-2127720266:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } }, "key": "sarah" } ] } } } ] } FireAllRulesCommand Executes all rules in the KIE session. Table 24.8. Command attributes Name Description Requirement max Maximum number of rules to be executed. The default is -1 and does not put any restriction on execution. Optional out-identifier ID to be used for retrieving the number of fired rules in execution results. Optional agenda-filter Agenda Filter to be used for rule execution. Optional Example JSON request body { "commands" : [ { "fire-all-rules": { "max": 10, "out-identifier": "firedActivations" } } ] } Example Java command FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand(); fireAllRulesCommand.setMax(10); fireAllRulesCommand.setOutIdentifier("firedActivations"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": 0, "key": "firedActivations" } ], "facts": [] } } } ] } QueryCommand Executes a query defined in the KIE base. Table 24.9. Command attributes Name Description Requirement name Query name. Required out-identifier ID of the query results. The query results are added in the execution results with this identifier. Optional arguments List of objects to be passed as a query parameter. 
Optional Example JSON request body { "commands": [ { "query": { "name": "persons", "arguments": [], "out-identifier": "persons" } } ] } Example Java command QueryCommand queryCommand = new QueryCommand(); queryCommand.setName("persons"); queryCommand.setOutIdentifier("persons"); Example server response (JSON) { "type": "SUCCESS", "msg": "Container stateful-session successfully called.", "result": { "execution-results": { "results": [ { "value": { "org.drools.core.runtime.rule.impl.FlatQueryResults": { "idFactHandleMaps": { "type": "LIST", "componentType": null, "element": [ { "type": "MAP", "componentType": null, "element": [ { "value": { "org.drools.core.common.DisconnectedFactHandle": { "id": 1, "identityHashCode": 1809949690, "objectHashCode": 1809949690, "recency": 1, "object": { "org.kie.server.testing.Person": { "fullname": "John Doe", "age": 47 } }, "entryPointId": "DEFAULT", "traitType": "NON_TRAIT", "external-form": "0:1:1809949690:1809949690:1:DEFAULT:NON_TRAIT:org.kie.server.testing.Person" } }, "key": "USDperson" } ] } ] }, "idResultMaps": { "type": "LIST", "componentType": null, "element": [ { "type": "MAP", "componentType": null, "element": [ { "value": { "org.kie.server.testing.Person": { "fullname": "John Doe", "age": 47 } }, "key": "USDperson" } ] } ] }, "identifiers": { "type": "SET", "componentType": null, "element": [ "USDperson" ] } } }, "key": "persons" } ], "facts": [] } } } SetGlobalCommand Sets an object to a global state. Table 24.10. Command attributes Name Description Requirement identifier ID of the global variable defined in the KIE base Required object Object to be set into the global variable Optional out Boolean to exclude the global variable you set from the execution results Optional out-identifier ID of the global execution result Optional Example JSON request body { "commands": [ { "set-global": { "identifier": "helper", "object": { "org.kie.server.testing.Person": { "fullname": "kyle", "age": 30 } }, "out-identifier": "output" } } ] } Example Java command SetGlobalCommand setGlobalCommand = new SetGlobalCommand(); setGlobalCommand.setIdentifier("helper"); setGlobalCommand.setObject(new Person("kyle", 30)); setGlobalCommand.setOut(true); setGlobalCommand.setOutIdentifier("output"); Example server response (JSON) { "type": "SUCCESS", "msg": "Container stateful-session successfully called.", "result": { "execution-results": { "results": [ { "value": { "org.kie.server.testing.Person": { "fullname": "kyle", "age": 30 } }, "key": "output" } ], "facts": [] } } } GetGlobalCommand Retrieves a previously defined global object. Table 24.11. Command attributes Name Description Requirement identifier ID of the global variable defined in the KIE base Required out-identifier ID to be used in the execution results Optional Example JSON request body { "commands": [ { "get-global": { "identifier": "helper", "out-identifier": "helperOutput" } } ] } Example Java command GetGlobalCommand getGlobalCommand = new GetGlobalCommand(); getGlobalCommand.setIdentifier("helper"); getGlobalCommand.setOutIdentifier("helperOutput"); Example server response (JSON) { "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": null, "key": "helperOutput" } ], "facts": [] } } } ] } | [
"{ \"lookup\": \"ksession1\", \"commands\": [ { \"insert\": { \"object\": { \"org.drools.compiler.test.Person\": { \"name\": \"john\", \"age\": 25 } } } }, { \"fire-all-rules\": { \"max\": 10, \"out-identifier\": \"firedActivations\" } } ] }",
"InsertObjectCommand insertCommand = new InsertObjectCommand(new Person(\"john\", 25)); FireAllRulesCommand fireCommand = new FireAllRulesCommand(); BatchExecutionCommand batch = new BatchExecutionCommandImpl(Arrays.asList(insertCommand, fireCommand), \"ksession1\");",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": 0, \"key\": \"firedActivations\" } ], \"facts\": [] } } } ] }",
"{ \"commands\": [ { \"insert\": { \"entry-point\": \"my stream\", \"object\": { \"org.drools.compiler.test.Person\": { \"age\": 25, \"name\": \"john\" } }, \"out-identifier\": \"john\", \"return-object\": false } } ] }",
"Command insertObjectCommand = CommandFactory.newInsert(new Person(\"john\", 25), \"john\", false, null); ksession.execute(insertObjectCommand);",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [], \"facts\": [ { \"value\": { \"org.drools.core.common.DefaultFactHandle\": { \"external-form\": \"0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap\" } }, \"key\": \"john\" } ] } } } ] }",
"{ \"commands\": [ { \"retract\": { \"fact-handle\": \"0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap\" } } ] }",
"RetractCommand retractCommand = new RetractCommand(); retractCommand.setFactHandleFromString(\"123:234:345:456:567\");",
"RetractCommand retractCommand = new RetractCommand(factHandle);",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container employee-rostering successfully called.\", \"result\": { \"execution-results\": { \"results\": [], \"facts\": [] } } } ] }",
"{ \"commands\": [ { \"modify\": { \"fact-handle\": \"0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap\", \"setters\": { \"accessor\": \"age\", \"value\": 25 } } } ] }",
"ModifyCommand modifyCommand = new ModifyCommand(factHandle); List<Setter> setters = new ArrayList<Setter>(); setters.add(new SetterImpl(\"age\", \"25\")); modifyCommand.setSetters(setters);",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container employee-rostering successfully called.\", \"result\": { \"execution-results\": { \"results\": [], \"facts\": [] } } } ] }",
"{ \"commands\": [ { \"get-object\": { \"fact-handle\": \"0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap\", \"out-identifier\": \"john\" } } ] }",
"GetObjectCommand getObjectCommand = new GetObjectCommand(); getObjectCommand.setFactHandleFromString(\"123:234:345:456:567\"); getObjectCommand.setOutIdentifier(\"john\");",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": null, \"key\": \"john\" } ], \"facts\": [] } } } ] }",
"{ \"commands\": [ { \"get-objects\": { \"out-identifier\": \"objects\" } } ] }",
"GetObjectsCommand getObjectsCommand = new GetObjectsCommand(); getObjectsCommand.setOutIdentifier(\"objects\");",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": [ { \"org.apache.xerces.dom.ElementNSImpl\": \"<?xml version=\\\"1.0\\\" encoding=\\\"UTF-16\\\"?>\\n<object xmlns:xsi=\\\"http://www.w3.org/2001/XMLSchema-instance\\\" xsi:type=\\\"person\\\"><age>25</age><name>john</name>\\n <\\/object>\" }, { \"org.drools.compiler.test.Person\": { \"name\": \"john\", \"age\": 25 } } ], \"key\": \"objects\" } ], \"facts\": [] } } } ] }",
"{ \"commands\": [ { \"insert-elements\": { \"objects\": [ { \"containedObject\": { \"@class\": \"org.drools.compiler.test.Person\", \"age\": 25, \"name\": \"john\" } }, { \"containedObject\": { \"@class\": \"Person\", \"age\": 35, \"name\": \"sarah\" } } ] } } ] }",
"List<Object> objects = new ArrayList<Object>(); objects.add(new Person(\"john\", 25)); objects.add(new Person(\"sarah\", 35)); Command insertElementsCommand = CommandFactory.newInsertElements(objects);",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [], \"facts\": [ { \"value\": { \"org.drools.core.common.DefaultFactHandle\": { \"external-form\": \"0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap\" } }, \"key\": \"john\" }, { \"value\": { \"org.drools.core.common.DefaultFactHandle\": { \"external-form\": \"0:4:436792766:-2127720266:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap\" } }, \"key\": \"sarah\" } ] } } } ] }",
"{ \"commands\" : [ { \"fire-all-rules\": { \"max\": 10, \"out-identifier\": \"firedActivations\" } } ] }",
"FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand(); fireAllRulesCommand.setMax(10); fireAllRulesCommand.setOutIdentifier(\"firedActivations\");",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": 0, \"key\": \"firedActivations\" } ], \"facts\": [] } } } ] }",
"{ \"commands\": [ { \"query\": { \"name\": \"persons\", \"arguments\": [], \"out-identifier\": \"persons\" } } ] }",
"QueryCommand queryCommand = new QueryCommand(); queryCommand.setName(\"persons\"); queryCommand.setOutIdentifier(\"persons\");",
"{ \"type\": \"SUCCESS\", \"msg\": \"Container stateful-session successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": { \"org.drools.core.runtime.rule.impl.FlatQueryResults\": { \"idFactHandleMaps\": { \"type\": \"LIST\", \"componentType\": null, \"element\": [ { \"type\": \"MAP\", \"componentType\": null, \"element\": [ { \"value\": { \"org.drools.core.common.DisconnectedFactHandle\": { \"id\": 1, \"identityHashCode\": 1809949690, \"objectHashCode\": 1809949690, \"recency\": 1, \"object\": { \"org.kie.server.testing.Person\": { \"fullname\": \"John Doe\", \"age\": 47 } }, \"entryPointId\": \"DEFAULT\", \"traitType\": \"NON_TRAIT\", \"external-form\": \"0:1:1809949690:1809949690:1:DEFAULT:NON_TRAIT:org.kie.server.testing.Person\" } }, \"key\": \"USDperson\" } ] } ] }, \"idResultMaps\": { \"type\": \"LIST\", \"componentType\": null, \"element\": [ { \"type\": \"MAP\", \"componentType\": null, \"element\": [ { \"value\": { \"org.kie.server.testing.Person\": { \"fullname\": \"John Doe\", \"age\": 47 } }, \"key\": \"USDperson\" } ] } ] }, \"identifiers\": { \"type\": \"SET\", \"componentType\": null, \"element\": [ \"USDperson\" ] } } }, \"key\": \"persons\" } ], \"facts\": [] } } }",
"{ \"commands\": [ { \"set-global\": { \"identifier\": \"helper\", \"object\": { \"org.kie.server.testing.Person\": { \"fullname\": \"kyle\", \"age\": 30 } }, \"out-identifier\": \"output\" } } ] }",
"SetGlobalCommand setGlobalCommand = new SetGlobalCommand(); setGlobalCommand.setIdentifier(\"helper\"); setGlobalCommand.setObject(new Person(\"kyle\", 30)); setGlobalCommand.setOut(true); setGlobalCommand.setOutIdentifier(\"output\");",
"{ \"type\": \"SUCCESS\", \"msg\": \"Container stateful-session successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": { \"org.kie.server.testing.Person\": { \"fullname\": \"kyle\", \"age\": 30 } }, \"key\": \"output\" } ], \"facts\": [] } } }",
"{ \"commands\": [ { \"get-global\": { \"identifier\": \"helper\", \"out-identifier\": \"helperOutput\" } } ] }",
"GetGlobalCommand getGlobalCommand = new GetGlobalCommand(); getGlobalCommand.setIdentifier(\"helper\"); getGlobalCommand.setOutIdentifier(\"helperOutput\");",
"{ \"response\": [ { \"type\": \"SUCCESS\", \"msg\": \"Container command-script-container successfully called.\", \"result\": { \"execution-results\": { \"results\": [ { \"value\": null, \"key\": \"helperOutput\" } ], \"facts\": [] } } } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/runtime-commands-con_kie-apis |
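For orientation, this is a sketch of posting one of the request bodies above to the KIE Server REST endpoint with curl. The credentials, host, and container ID are placeholders and must match your own KIE Server deployment.
# POST a BatchExecutionCommand to a KIE container and fire the rules.
curl -u 'kieserver:password' \
    -H 'Content-Type: application/json' -H 'Accept: application/json' \
    -X POST 'http://localhost:8080/kie-server/services/rest/server/containers/instances/command-script-container' \
    -d '{"lookup":"ksession1","commands":[{"insert":{"object":{"org.drools.compiler.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":{"out-identifier":"firedActivations"}}]}'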
Chapter 9. Detach volumes after non-graceful node shutdown | Chapter 9. Detach volumes after non-graceful node shutdown This feature allows drivers to automatically detach volumes when a node goes down non-gracefully. Important Detach CSI volumes after non-graceful node shutdown is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.1. Overview A graceful node shutdown occurs when the kubelet's node shutdown manager detects the upcoming node shutdown action. Non-graceful shutdowns occur when the kubelet does not detect a node shutdown action, which can occur because of system or hardware failures. Also, the kubelet may not detect a node shutdown action when the shutdown command does not trigger the Inhibitor Locks mechanism used by the kubelet on Linux, or because of a user error, for example, if the shutdownGracePeriod and shutdownGracePeriodCriticalPods details are not configured correctly for that node. With this feature, when a non-graceful node shutdown occurs, you can manually add an out-of-service taint on the node to allow volumes to automatically detach from the node. 9.2. Adding an out-of-service taint manually for automatic volume detachment Prerequisites Access to the cluster with cluster-admin privileges. Procedure To allow volumes to detach automatically from a node after a non-graceful node shutdown: After a node is detected as unhealthy, shut down the worker node. Ensure that the node is shut down by running the following command and checking the status: oc get node <node name> 1 1 <node name> = name of the non-gracefully shutdown node Important If the node is not completely shut down, do not proceed with tainting the node. If the node is still up and the taint is applied, filesystem corruption can occur. Taint the corresponding node object by running the following command: Important Tainting a node this way deletes all pods on that node. This also causes any pods that are backed by statefulsets to be evicted, and replacement pods to be created on a different node. oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1 1 <node name> = name of the non-gracefully shutdown node After the taint is applied, the volumes detach from the shutdown node, allowing their disks to be attached to a different node. Example The resulting YAML file resembles the following: spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown Restart the node. Remove the taint. | [
"get node <node name> 1",
"adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1",
"spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage/ephemeral-storage-csi-vol-detach-non-graceful-shutdown |
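A condensed sketch of the recovery flow described above follows; <node_name> is a placeholder, and the node must already be confirmed as completely powered off before the taint is applied.
# Confirm the node is reported as down (NotReady) before proceeding.
oc get node <node_name>
# Apply the out-of-service taint so volumes detach and pods are rescheduled.
oc adm taint node <node_name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
# After the node is repaired and restarted, remove the taint (trailing dash).
oc adm taint node <node_name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-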
Chapter 11. Installing a cluster on AWS into a Secret or Top Secret Region | Chapter 11. Installing a cluster on AWS into a Secret or Top Secret Region In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) into the following secret regions: Secret Commercial Cloud Services (SC2S) Commercial Cloud Services (C2S) To configure a cluster in either region, you change parameters in the install-config.yaml file before you install the cluster. 11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 11.2. AWS secret regions The following AWS secret partitions are supported: us-isob-east-1 (SC2S) us-iso-east-1 (C2S) Note The maximum supported MTU in the AWS SC2S and C2S Regions is not the same as AWS commercial. For more information about configuring MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations . 11.3. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image for the AWS Secret and Top Secret Regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. Important You must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file. 11.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS Top Secret Region. Therefore, clusters must be private if they are deployed to an AWS Top Secret Region.
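To illustrate the custom CA requirement described under Installation requirements above, the following is a minimal sketch of making the trust bundle available to the installation host and to the AWS CLI; the file path and profile name are assumptions, not values from this chapter.
# Point AWS API calls made from the installation host at the custom CA bundle.
export AWS_CA_BUNDLE=/etc/pki/ca-trust/source/anchors/secret-region-ca-bundle.pem
# Alternatively, set the bundle per profile in the AWS CLI configuration file.
cat >> ~/.aws/config <<'EOF'
[profile secret-region-install]
ca_bundle = /etc/pki/ca-trust/source/anchors/secret-region-ca-bundle.pem
EOF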
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 11.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to the internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 11.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
You must configure networking for the subnets in which you install your cluster yourself. 11.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. A cluster in an SC2S or C2S Region is unable to reach the public IP addresses for the EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using.
Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 11.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 11.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. 
This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 11.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 11.6. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.7. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.12.0 .
Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 11.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 11.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 11.10. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 11.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 11.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 11.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} .
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 11.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 11.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 11.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 11.3. 
Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . 
String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . 
To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 11.10.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 11.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. 
controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. Valid subnet IDs. 11.10.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. 
If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 11.1. Machine types based on 64-bit x86 architecture for secret regions c4.* c5.* i3.* m4.* m5.* r4.* r5.* t3.* 11.10.3. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 
7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 25 The custom CA certificate. This is required when deploying to the SC2S or C2S Regions because the AWS API requires a custom CA trust bundle. 11.10.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 11.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 11.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 11.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 11.16. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-secret-region |
GitOps | GitOps OpenShift Container Platform 4.12 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/gitops/index |
Chapter 1. The OpenStack Client | Chapter 1. The OpenStack Client The openstack client is a common OpenStack command-line interface (CLI). This chapter documents the main options for openstack version 4.0.2. Command-line interface to the OpenStack APIs Usage: Table 1.1. Command arguments Value Summary --version Show program's version number and exit -v, --verbose Increase verbosity of output. can be repeated. -q, --quiet Suppress output except warnings and errors. --log-file LOG_FILE Specify a file to log output. disabled by default. -h, --help Show help message and exit. --debug Show tracebacks on errors. --os-cloud <cloud-config-name> Cloud name in clouds.yaml (env: os_cloud) --os-region-name <auth-region-name> Authentication region name (env: os_region_name) --os-cacert <ca-bundle-file> Ca certificate bundle file (env: os_cacert) --os-cert <certificate-file> Client certificate bundle file (env: os_cert) --os-key <key-file> Client certificate key file (env: os_key) --verify Verify server certificate (default) --insecure Disable server certificate verification --os-default-domain <auth-domain> Default domain id, default=default. (env: OS_DEFAULT_DOMAIN) --os-interface <interface> Select an interface type. valid interface types: [admin, public, internal]. default=public, (Env: OS_INTERFACE) --os-service-provider <service_provider> Authenticate with and perform the command on a service provider using Keystone-to-keystone federation. Must also specify the remote project option. --os-remote-project-name <remote_project_name> Project name when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-id <remote_project_id> Project id when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-name <remote_project_domain_name> Domain name of the project when authenticating to a service provider if using Keystone-to-Keystone federation. --os-remote-project-domain-id <remote_project_domain_id> Domain id of the project when authenticating to a service provider if using Keystone-to-Keystone federation. 
--timing Print api call timing info --os-beta-command Enable beta commands which are subject to change --os-profile hmac-key Hmac key for encrypting profiling context data --os-compute-api-version <compute-api-version> Compute api version, default=2.1 (env: OS_COMPUTE_API_VERSION) --os-identity-api-version <identity-api-version> Identity api version, default=3 (env: OS_IDENTITY_API_VERSION) --os-image-api-version <image-api-version> Image api version, default=2 (env: OS_IMAGE_API_VERSION) --os-network-api-version <network-api-version> Network api version, default=2.0 (env: OS_NETWORK_API_VERSION) --os-object-api-version <object-api-version> Object api version, default=1 (env: OS_OBJECT_API_VERSION) --os-volume-api-version <volume-api-version> Volume api version, default=3 (env: OS_VOLUME_API_VERSION) --os-queues-api-version <queues-api-version> Queues api version, default=2 (env: OS_QUEUES_API_VERSION) --os-database-api-version <database-api-version> Database api version, default=1 (env: OS_DATABASE_API_VERSION) --os-tripleoclient-api-version <tripleoclient-api-version> Tripleo client api version, default=1 (env: OS_TRIPLEOCLIENT_API_VERSION) --os-data-processing-api-version <data-processing-api-version> Data processing api version, default=1.1 (env: OS_DATA_PROCESSING_API_VERSION) --os-data-processing-url OS_DATA_PROCESSING_URL Data processing api url, (env: OS_DATA_PROCESSING_API_URL) --os-loadbalancer-api-version <loadbalancer-api-version> Osc plugin api version, default=2.0 (env: OS_LOADBALANCER_API_VERSION) --os-workflow-api-version <workflow-api-version> Workflow api version, default=2 (env: OS_WORKFLOW_API_VERSION) --os-container-infra-api-version <container-infra-api-version> Container-infra api version, default=1 (env: OS_CONTAINER_INFRA_API_VERSION) --os-baremetal-api-version <baremetal-api-version> Bare metal api version, default="latest" (the maximum version supported by both the client and the server). (Env: OS_BAREMETAL_API_VERSION) --inspector-api-version INSPECTOR_API_VERSION Inspector api version, only 1 is supported now (env: INSPECTOR_VERSION). --inspector-url INSPECTOR_URL Inspector url, defaults to localhost (env: INSPECTOR_URL). --os-orchestration-api-version <orchestration-api-version> Orchestration api version, default=1 (env: OS_ORCHESTRATION_API_VERSION) --os-dns-api-version <dns-api-version> Dns api version, default=2 (env: os_dns_api_version) --os-key-manager-api-version <key-manager-api-version> Barbican api version, default=1 (env: OS_KEY_MANAGER_API_VERSION) --os-metrics-api-version <metrics-api-version> Metrics api version, default=1 (env: OS_METRICS_API_VERSION) --os-alarming-api-version <alarming-api-version> Queues api version, default=2 (env: OS_ALARMING_API_VERSION) --os-auth-type <auth-type> Select an authentication type. available types: v2password, aodh-noauth, v3oidcaccesstoken, token, v3adfspassword, v3token, v3applicationcredential, v3totp, v3oidcauthcode, noauth, v3multifactor, password, v3password, v3oidcclientcredentials, gnocchi-noauth, v3oidcpassword, v2token, gnocchi- basic, v3tokenlessauth, v1password, v3samlpassword, none, v3oauth1, admin_token. 
Default: selected based on --os-username/--os-token (Env: OS_AUTH_TYPE) --os-auth-url <auth-auth-url> With v2password: authentication url with v3oidcaccesstoken: Authentication URL With token: Authentication URL With v3adfspassword: Authentication URL With v3token: Authentication URL With v3applicationcredential: Authentication URL With v3totp: Authentication URL With v3oidcauthcode: Authentication URL With v3multifactor: Authentication URL With password: Authentication URL With v3password: Authentication URL With v3oidcclientcredentials: Authentication URL With v3oidcpassword: Authentication URL With v2token: Authentication URL With v3tokenlessauth: Authentication URL With v1password: Authentication URL With v3samlpassword: Authentication URL With v3oauth1: Authentication URL (Env: OS_AUTH_URL) --os-trust-id <auth-trust-id> With v2password: trust id with v3oidcaccesstoken: Trust ID With token: Trust ID With v3adfspassword: Trust ID With v3token: Trust ID With v3applicationcredential: Trust ID With v3totp: Trust ID With v3oidcauthcode: Trust ID With v3multifactor: Trust ID With password: Trust ID With v3password: Trust ID With v3oidcclientcredentials: Trust ID With v3oidcpassword: Trust ID With v2token: Trust ID With v3samlpassword: Trust ID (Env: OS_TRUST_ID) --os-username <auth-username> With v2password: username to login with with v3adfspassword: Username With v3applicationcredential: Username With v3totp: Username With password: Username With v3password: Username With v3oidcpassword: Username With v1password: Username to login with With v3samlpassword: Username (Env: OS_USERNAME) --os-user-id <auth-user-id> With v2password: user id to login with with aodh- noauth: User ID With v3applicationcredential: User ID With v3totp: User ID With noauth: User ID With password: User id With v3password: User ID With gnocchi-noauth: User ID (Env: OS_USER_ID) --os-password <auth-password> With v2password: password to use with v3adfspassword: Password With password: User's password With v3password: User's password With v3oidcpassword: Password With v1password: Password to use With v3samlpassword: Password (Env: OS_PASSWORD) --os-project-id <auth-project-id> With aodh-noauth: project id with v3oidcaccesstoken: Project ID to scope to With token: Project ID to scope to With v3adfspassword: Project ID to scope to With v3token: Project ID to scope to With v3applicationcredential: Project ID to scope to With v3totp: Project ID to scope to With v3oidcauthcode: Project ID to scope to With noauth: Project ID With v3multifactor: Project ID to scope to With password: Project ID to scope to With v3password: Project ID to scope to With v3oidcclientcredentials: Project ID to scope to With gnocchi-noauth: Project ID With v3oidcpassword: Project ID to scope to With v3tokenlessauth: Project ID to scope to With v3samlpassword: Project ID to scope to (Env: OS_PROJECT_ID) --os-roles <auth-roles> With aodh-noauth: roles with gnocchi-noauth: roles (Env: OS_ROLES) --os-aodh-endpoint <auth-aodh-endpoint> With aodh-noauth: aodh endpoint (env: OS_AODH_ENDPOINT) --os-system-scope <auth-system-scope> With v3oidcaccesstoken: scope for system operations With token: Scope for system operations With v3adfspassword: Scope for system operations With v3token: Scope for system operations With v3applicationcredential: Scope for system operations With v3totp: Scope for system operations With v3oidcauthcode: Scope for system operations With v3multifactor: Scope for system operations With password: Scope for system operations With 
v3password: Scope for system operations With v3oidcclientcredentials: Scope for system operations With v3oidcpassword: Scope for system operations With v3samlpassword: Scope for system operations (Env: OS_SYSTEM_SCOPE) --os-domain-id <auth-domain-id> With v3oidcaccesstoken: domain id to scope to with token: Domain ID to scope to With v3adfspassword: Domain ID to scope to With v3token: Domain ID to scope to With v3applicationcredential: Domain ID to scope to With v3totp: Domain ID to scope to With v3oidcauthcode: Domain ID to scope to With v3multifactor: Domain ID to scope to With password: Domain ID to scope to With v3password: Domain ID to scope to With v3oidcclientcredentials: Domain ID to scope to With v3oidcpassword: Domain ID to scope to With v3tokenlessauth: Domain ID to scope to With v3samlpassword: Domain ID to scope to (Env: OS_DOMAIN_ID) --os-domain-name <auth-domain-name> With v3oidcaccesstoken: domain name to scope to with token: Domain name to scope to With v3adfspassword: Domain name to scope to With v3token: Domain name to scope to With v3applicationcredential: Domain name to scope to With v3totp: Domain name to scope to With v3oidcauthcode: Domain name to scope to With v3multifactor: Domain name to scope to With password: Domain name to scope to With v3password: Domain name to scope to With v3oidcclientcredentials: Domain name to scope to With v3oidcpassword: Domain name to scope to With v3tokenlessauth: Domain name to scope to With v3samlpassword: Domain name to scope to (Env: OS_DOMAIN_NAME) --os-project-name <auth-project-name> With v3oidcaccesstoken: project name to scope to with token: Project name to scope to With v3adfspassword: Project name to scope to With v3token: Project name to scope to With v3applicationcredential: Project name to scope to With v3totp: Project name to scope to With v3oidcauthcode: Project name to scope to With v3multifactor: Project name to scope to With password: Project name to scope to With v3password: Project name to scope to With v3oidcclientcredentials: Project name to scope to With v3oidcpassword: Project name to scope to With v3tokenlessauth: Project name to scope to With v1password: Swift account to use With v3samlpassword: Project name to scope to (Env: OS_PROJECT_NAME) --os-project-domain-id <auth-project-domain-id> With v3oidcaccesstoken: domain id containing project With token: Domain ID containing project With v3adfspassword: Domain ID containing project With v3token: Domain ID containing project With v3applicationcredential: Domain ID containing project With v3totp: Domain ID containing project With v3oidcauthcode: Domain ID containing project With v3multifactor: Domain ID containing project With password: Domain ID containing project With v3password: Domain ID containing project With v3oidcclientcredentials: Domain ID containing project With v3oidcpassword: Domain ID containing project With v3tokenlessauth: Domain ID containing project With v3samlpassword: Domain ID containing project (Env: OS_PROJECT_DOMAIN_ID) --os-project-domain-name <auth-project-domain-name> With v3oidcaccesstoken: domain name containing project With token: Domain name containing project With v3adfspassword: Domain name containing project With v3token: Domain name containing project With v3applicationcredential: Domain name containing project With v3totp: Domain name containing project With v3oidcauthcode: Domain name containing project With v3multifactor: Domain name containing project With password: Domain name containing project With v3password: Domain 
name containing project With v3oidcclientcredentials: Domain name containing project With v3oidcpassword: Domain name containing project With v3tokenlessauth: Domain name containing project With v3samlpassword: Domain name containing project (Env: OS_PROJECT_DOMAIN_NAME) --os-identity-provider <auth-identity-provider> With v3oidcaccesstoken: identity provider's name with v3adfspassword: Identity Provider's name With v3oidcauthcode: Identity Provider's name With v3oidcclientcredentials: Identity Provider's name With v3oidcpassword: Identity Provider's name With v3samlpassword: Identity Provider's name (Env: OS_IDENTITY_PROVIDER) --os-protocol <auth-protocol> With v3oidcaccesstoken: protocol for federated plugin With v3adfspassword: Protocol for federated plugin With v3oidcauthcode: Protocol for federated plugin With v3oidcclientcredentials: Protocol for federated plugin With v3oidcpassword: Protocol for federated plugin With v3samlpassword: Protocol for federated plugin (Env: OS_PROTOCOL) --os-access-token <auth-access-token> With v3oidcaccesstoken: oauth 2.0 access token (env: OS_ACCESS_TOKEN) --os-default-domain-id <auth-default-domain-id> With token: optional domain id to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With password: Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_ID) --os-default-domain-name <auth-default-domain-name> With token: optional domain name to use with v3 api and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. With password: Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. (Env: OS_DEFAULT_DOMAIN_NAME) --os-token <auth-token> With token: token to authenticate with with v3token: Token to authenticate with With v2token: Token With admin_token: The token that will always be used (Env: OS_TOKEN) --os-identity-provider-url <auth-identity-provider-url> With v3adfspassword: an identity provider url, where the SAML authentication request will be sent. With v3samlpassword: An Identity Provider URL, where the SAML2 authentication request will be sent. 
(Env: OS_IDENTITY_PROVIDER_URL) --os-service-provider-endpoint <auth-service-provider-endpoint> With v3adfspassword: service provider's endpoint (env: OS_SERVICE_PROVIDER_ENDPOINT) --os-service-provider-entity-id <auth-service-provider-entity-id> With v3adfspassword: service provider's saml entity id (Env: OS_SERVICE_PROVIDER_ENTITY_ID) --os-user-domain-id <auth-user-domain-id> With v3applicationcredential: user's domain id with v3totp: User's domain id With password: User's domain id With v3password: User's domain id (Env: OS_USER_DOMAIN_ID) --os-user-domain-name <auth-user-domain-name> With v3applicationcredential: user's domain name with v3totp: User's domain name With password: User's domain name With v3password: User's domain name (Env: OS_USER_DOMAIN_NAME) --os-application-credential-secret <auth-application-credential-secret> With v3applicationcredential: application credential auth secret (Env: OS_APPLICATION_CREDENTIAL_SECRET) --os-application-credential-id <auth-application-credential-id> With v3applicationcredential: application credential ID (Env: OS_APPLICATION_CREDENTIAL_ID) --os-application-credential-name <auth-application-credential-name> With v3applicationcredential: application credential name (Env: OS_APPLICATION_CREDENTIAL_NAME) --os-passcode <auth-passcode> With v3totp: user's totp passcode (env: os_passcode) --os-client-id <auth-client-id> With v3oidcauthcode: oauth 2.0 client id with v3oidcclientcredentials: OAuth 2.0 Client ID With v3oidcpassword: OAuth 2.0 Client ID (Env: OS_CLIENT_ID) --os-client-secret <auth-client-secret> With v3oidcauthcode: oauth 2.0 client secret with v3oidcclientcredentials: OAuth 2.0 Client Secret With v3oidcpassword: OAuth 2.0 Client Secret (Env: OS_CLIENT_SECRET) --os-openid-scope <auth-openid-scope> With v3oidcauthcode: openid connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcclientcredentials: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. With v3oidcpassword: OpenID Connect scope that is requested from authorization server. Note that the OpenID Connect specification states that "openid" must be always specified. (Env: OS_OPENID_SCOPE) --os-access-token-endpoint <auth-access-token-endpoint> With v3oidcauthcode: openid connect provider token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcclientcredentials: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. With v3oidcpassword: OpenID Connect Provider Token Endpoint. Note that if a discovery document is being passed this option will override the endpoint provided by the server in the discovery document. (Env: OS_ACCESS_TOKEN_ENDPOINT) --os-discovery-endpoint <auth-discovery-endpoint> With v3oidcauthcode: openid connect discovery document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well-known/openid- configuration With v3oidcclientcredentials: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. 
This URL should look like https://idp.example.org/.well- known/openid-configuration With v3oidcpassword: OpenID Connect Discovery Document URL. The discovery document will be used to obtain the values of the access token endpoint and the authentication endpoint. This URL should look like https://idp.example.org/.well- known/openid-configuration (Env: OS_DISCOVERY_ENDPOINT) --os-access-token-type <auth-access-token-type> With v3oidcauthcode: oauth 2.0 authorization server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcclientcredentials: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" With v3oidcpassword: OAuth 2.0 Authorization Server Introspection token type, it is used to decide which type of token will be used when processing token introspection. Valid values are: "access_token" or "id_token" (Env: OS_ACCESS_TOKEN_TYPE) --os-redirect-uri <auth-redirect-uri> With v3oidcauthcode: openid connect redirect url (env: OS_REDIRECT_URI) --os-code <auth-code> With v3oidcauthcode: oauth 2.0 authorization code (Env: OS_CODE) --os-endpoint <auth-endpoint> With noauth: cinder endpoint with gnocchi-noauth: Gnocchi endpoint With gnocchi-basic: Gnocchi endpoint With none: The endpoint that will always be used With admin_token: The endpoint that will always be used (Env: OS_ENDPOINT) --os-auth-methods <auth-auth-methods> With v3multifactor: methods to authenticate with. (Env: OS_AUTH_METHODS) --os-user <auth-user> With gnocchi-basic: user (env: os_user) --os-consumer-key <auth-consumer-key> With v3oauth1: oauth consumer id/key (env: OS_CONSUMER_KEY) --os-consumer-secret <auth-consumer-secret> With v3oauth1: oauth consumer secret (env: OS_CONSUMER_SECRET) --os-access-key <auth-access-key> With v3oauth1: oauth access key (env: os_access_key) --os-access-secret <auth-access-secret> With v3oauth1: oauth access secret (env: OS_ACCESS_SECRET) | [
"openstack [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug] [--os-cloud <cloud-config-name>] [--os-region-name <auth-region-name>] [--os-cacert <ca-bundle-file>] [--os-cert <certificate-file>] [--os-key <key-file>] [--verify | --insecure] [--os-default-domain <auth-domain>] [--os-interface <interface>] [--os-service-provider <service_provider>] [--os-remote-project-name <remote_project_name> | --os-remote-project-id <remote_project_id>] [--os-remote-project-domain-name <remote_project_domain_name> | --os-remote-project-domain-id <remote_project_domain_id>] [--timing] [--os-beta-command] [--os-profile hmac-key] [--os-compute-api-version <compute-api-version>] [--os-identity-api-version <identity-api-version>] [--os-image-api-version <image-api-version>] [--os-network-api-version <network-api-version>] [--os-object-api-version <object-api-version>] [--os-volume-api-version <volume-api-version>] [--os-queues-api-version <queues-api-version>] [--os-database-api-version <database-api-version>] [--os-tripleoclient-api-version <tripleoclient-api-version>] [--os-data-processing-api-version <data-processing-api-version>] [--os-data-processing-url OS_DATA_PROCESSING_URL] [--os-loadbalancer-api-version <loadbalancer-api-version>] [--os-workflow-api-version <workflow-api-version>] [--os-container-infra-api-version <container-infra-api-version>] [--os-baremetal-api-version <baremetal-api-version>] [--inspector-api-version INSPECTOR_API_VERSION] [--inspector-url INSPECTOR_URL] [--os-orchestration-api-version <orchestration-api-version>] [--os-dns-api-version <dns-api-version>] [--os-key-manager-api-version <key-manager-api-version>] [--os-metrics-api-version <metrics-api-version>] [--os-alarming-api-version <alarming-api-version>] [--os-auth-type <auth-type>] [--os-auth-url <auth-auth-url>] [--os-trust-id <auth-trust-id>] [--os-username <auth-username>] [--os-user-id <auth-user-id>] [--os-password <auth-password>] [--os-project-id <auth-project-id>] [--os-roles <auth-roles>] [--os-aodh-endpoint <auth-aodh-endpoint>] [--os-system-scope <auth-system-scope>] [--os-domain-id <auth-domain-id>] [--os-domain-name <auth-domain-name>] [--os-project-name <auth-project-name>] [--os-project-domain-id <auth-project-domain-id>] [--os-project-domain-name <auth-project-domain-name>] [--os-identity-provider <auth-identity-provider>] [--os-protocol <auth-protocol>] [--os-access-token <auth-access-token>] [--os-default-domain-id <auth-default-domain-id>] [--os-default-domain-name <auth-default-domain-name>] [--os-token <auth-token>] [--os-identity-provider-url <auth-identity-provider-url>] [--os-service-provider-endpoint <auth-service-provider-endpoint>] [--os-service-provider-entity-id <auth-service-provider-entity-id>] [--os-user-domain-id <auth-user-domain-id>] [--os-user-domain-name <auth-user-domain-name>] [--os-application-credential-secret <auth-application-credential-secret>] [--os-application-credential-id <auth-application-credential-id>] [--os-application-credential-name <auth-application-credential-name>] [--os-passcode <auth-passcode>] [--os-client-id <auth-client-id>] [--os-client-secret <auth-client-secret>] [--os-openid-scope <auth-openid-scope>] [--os-access-token-endpoint <auth-access-token-endpoint>] [--os-discovery-endpoint <auth-discovery-endpoint>] [--os-access-token-type <auth-access-token-type>] [--os-redirect-uri <auth-redirect-uri>] [--os-code <auth-code>] [--os-endpoint <auth-endpoint>] [--os-auth-methods <auth-auth-methods>] [--os-user <auth-user>] [--os-consumer-key 
<auth-consumer-key>] [--os-consumer-secret <auth-consumer-secret>] [--os-access-key <auth-access-key>] [--os-access-secret <auth-access-secret>]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/the_openstack_client |
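For the openstack client reference above, the following is a minimal, hypothetical usage sketch: it authenticates with the password plugin by exporting the environment variables noted in parentheses throughout the option list, then runs a command. The URL, credential, and project values are placeholders, and openstack server list is only an example command, not one taken from this reference.
# select the password authentication plugin and the v3 Identity API
export OS_AUTH_TYPE=password
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_PROJECT_NAME=myproject
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
# any command now authenticates with the values above
openstack server list
# per-service API versions can also be overridden on the command line
openstack --os-compute-api-version 2.1 server list
Equivalently, --os-cloud <cloud-config-name> selects a named cloud configuration instead of setting individual environment variables.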
Appendix A. NetKVM Driver Parameters | Appendix A. NetKVM Driver Parameters After the NetKVM driver is installed, you can configure it to better suit your environment. The parameters listed in this section can be configured in the Windows Device Manager ( devmgmt.msc ). Important Modifying the driver's parameters causes Windows to reload that driver. This interrupts existing network activity. Procedure A.1. Configuring NetKVM Parameters Open Device Manager Click on the Start button. In the right-hand pane, right-click on Computer , and click Manage . If prompted, click Continue on the User Account Control window. This opens the Computer Management window. In the left-hand pane of the Computer Management window, click Device Manager . Locate the correct device In the central pane of the Computer Management window, click on the + symbol beside Network adapters . Under the list of Red Hat VirtIO Ethernet Adapter devices, double-click on NetKVM . This opens the Properties window for that device. View device parameters In the Properties window, click on the Advanced tab. Modify device parameters Click on the parameter you wish to modify to display the options for that parameter. Modify the options as appropriate, then click on OK to save your changes. A.1. Configurable Parameters for NetKVM Logging parameters Logging.Enable A Boolean value that determines whether logging is enabled. The default value is 1 (enabled). Logging.Level An integer that defines the logging level. As the integer increases, so does the verbosity of the log. The default value is 0 (errors only). 1-2 adds configuration messages. 3-4 adds packet flow information. 5-6 adds interrupt and DPC level trace information. Important High logging levels will slow down your guest virtual machine. Logging.Statistics(sec) An integer that defines whether log statistics are printed, and the time in seconds between each periodical statistics printout. The default value is 0 (no logging statistics). Initial parameters Assign MAC A string that defines the locally-administered MAC address for the paravirtualized NIC. This is not set by default. Init.ConnectionRate(Mb) An integer that represents the connection rate in megabytes. The default value for Windows 2008 and later is 10000 . Init.Do802.1PQ A Boolean value that enables Priority/VLAN tag population and removal support. The default value is 1 (enabled). Init.UseMergedBuffers A Boolean value that enables merge-able RX buffers. The default value is 1 (enabled). Init.UsePublishEvents A Boolean value that enables published event use. The default value is 1 (enabled). Init.MTUSize An integer that defines the maximum transmission unit (MTU). The default value is 1500 . Any value from 500 to 65500 is acceptable. Init.IndirectTx Controls whether indirect ring descriptors are in use. The default value is Disable , which disables use of indirect ring descriptors. Other valid values are Enable , which enables indirect ring descriptor usage; and Enable* , which enables conditional use of indirect ring descriptors. Init.MaxTxBuffers An integer that represents the amount of TX ring descriptors that will be allocated. The default value is 1024 . Valid values are: 16, 32, 64, 128, 256, 512, or 1024. Init.MaxRxBuffers An integer that represents the amount of RX ring descriptors that will be allocated. The default value is 256 . Valid values are: 16, 32, 64, 128, 256, 512, or 1024. Offload.Tx.Checksum Specifies the TX checksum offloading mode. 
In Red Hat Enterprise Linux 6.4 and onward, the valid values for this parameter are All (the default), which enables IP, TCP and UDP checksum offloading for both IPv4 and IPv6; TCP/UDP(v4,v6) , which enables TCP and UDP checksum offloading for both IPv4 and IPv6; TCP/UDP(v4) , which enables TCP and UDP checksum offloading for IPv4 only; and TCP(v4) , which enables only TCP checksum offloading for IPv4 only. In Red Hat Enterprise Linux 6.3 and earlier, the valid values for this parameter are TCP/UDP (the default value), which enables TCP and UDP checksum offload; TCP , which enables only TCP checksum offload; or Disable , which disables TX checksum offload. Offload.Tx.LSO A Boolean value that enables TX TCP Large Segment Offload (LSO). The default value is 1 (enabled). Offload.Rx.Checksum Specifies the RX checksum offloading mode. In Red Hat Enterprise Linux 6.4 and onward, the valid values for this parameter are All (the default), which enables IP, TCP and UDP checksum offloading for both IPv4 and IPv6; TCP/UDP(v4,v6) , which enables TCP and UDP checksum offloading for both IPv4 and IPv6; TCP/UDP(v4) , which enables TCP and UDP checksum offloading for IPv4 only; and TCP(v4) , which enables only TCP checksum offloading for IPv4 only. In Red Hat Enterprise Linux 6.3 and earlier, the valid values are Disable (the default), which disables RX checksum offloading; All , which enables TCP, UDP, and IP checksum offloading; TCP/UDP , which enables TCP and UDP checksum offloading; and TCP , which enables only TCP checksum offloading. Test and debug parameters Important Test and debug parameters should only be used for testing or debugging; they should not be used in production. TestOnly.DelayConnect(ms) The period for which to delay connection upon startup, in milliseconds. The default value is 0 . TestOnly.DPCChecking Sets the DPC checking mode. 0 (the default) disables DPC checking. 1 enables DPC checking; each hang test verifies DPC activity and acts as if the DPC was spawned. 2 clears the device interrupt status and is otherwise identical to 1 . TestOnly.Scatter-Gather A Boolean value that determines whether scatter-gather functionality is enabled. The default value is 1 (enabled). Setting this value to 0 disables scatter-gather functionality and all dependent capabilities. TestOnly.InterruptRecovery A Boolean value that determines whether interrupt recovery is enabled. The default value is 1 (enabled). TestOnly.PacketFilter A Boolean value that determines whether packet filtering is enabled. The default value is 1 (enabled). TestOnly.BatchReceive A Boolean value that determines whether packets are received in batches, or singularly. The default value is 1 , which enables batched packet receipt. TestOnly.Promiscuous A Boolean value that determines whether promiscuous mode is enabled. The default value is 0 (disabled). TestOnly.AnalyzeIPPackets A Boolean value that determines whether the checksum fields of outgoing IP packets are tested and verified for debugging purposes. The default value is 0 (no checking). TestOnly.RXThrottle An integer that determines the number of receive packets handled in a single DPC. The default value is 1000 . TestOnly.UseSwTxChecksum A Boolean value that determines whether hardware checksumming is enabled. The default value is 0 (disabled). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/netkvm-parameters |
B.32. java-1.6.0-ibm | B.32. java-1.6.0-ibm B.32.1. RHSA-2011:0357 - Critical: java-1.6.0-ibm security update Updated java-1.6.0-ibm packages that fix several security issues are now available for Red Hat Enterprise Linux 4 Extras, and Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE link(s) associated with each description below. The IBM 1.6.0 Java release includes the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. CVE-2010-4422 , CVE-2010-4447 , CVE-2010-4448 , CVE-2010-4452 , CVE-2010-4454 , CVE-2010-4462 , CVE-2010-4463 , CVE-2010-4465 , CVE-2010-4466 , CVE-2010-4467 , CVE-2010-4468 , CVE-2010-4471 , CVE-2010-4473 , CVE-2010-4475 This update fixes several vulnerabilities in the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. Detailed vulnerability descriptions are linked from the IBM "Security alerts" page. Note: The RHSA-2010:0987 and RHSA-2011:0290 java-1.6.0-ibm errata were missing 64-bit PowerPC packages for Red Hat Enterprise Linux 4 Extras. This erratum provides 64-bit PowerPC packages for Red Hat Enterprise Linux 4 Extras as expected. All users of java-1.6.0-ibm are advised to upgrade to these updated packages, containing the IBM 1.6.0 SR9-FP1 Java release. All running instances of IBM Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/java-1_6_0-ibm
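As an illustration only (the advisory above does not include commands), applying such an update on Red Hat Enterprise Linux 5 or 6 typically amounts to updating the affected packages from the enabled Supplementary channel and then restarting any running IBM Java instances, for example:
# update all installed java-1.6.0-ibm packages, assuming the Supplementary channel is enabled
yum update 'java-1.6.0-ibm*'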
Chapter 33. JSLT | Chapter 33. JSLT Since Camel 3.1 Only producer is supported The JSLT component allows you to process a JSON messages using an JSLT expression. This can be ideal when doing JSON to JSON transformation or querying data. Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jslt</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency> 33.1. URI format Where specName is the classpath-local URI of the specification to invoke; or the complete URL of the remote specification (eg: file://folder/myfile.vm ). 33.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 33.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 33.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 33.2.3. Component Options The JSLT component supports 5 options, which are listed below. Name Description Default Type allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean functions (advanced) JSLT can be extended by plugging in functions written in Java. Collection objectFilter (advanced) JSLT can be extended by plugging in a custom jslt object filter. JsonFilter 33.2.4. Endpoint Options The JSLT endpoint is configured using URI syntax: with the following path and query parameters: 33.2.4.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 33.2.4.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean mapBigDecimalAsFloats (producer) If true, the mapper will use the USE_BIG_DECIMAL_FOR_FLOATS in serialization features. false boolean objectMapper (producer) Setting a custom JSON Object Mapper to be used. ObjectMapper prettyPrint (common) If true, JSON in output message is pretty printed. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 33.3. Message Headers The JSLT component supports 2 message header(s), which is/are listed below: Name Description Default Type CamelJsltString (producer) Constant: HEADER_JSLT_STRING The JSLT Template as String. String CamelJsltResourceUri (producer) Constant: HEADER_JSLT_RESOURCE_URI The resource URI. String 33.4. Passing values to JSLT Camel can supply exchange information as variables when applying a JSLT expression on the body. The available variables from the Exchange are: name value headers The headers of the In message as a json object exchange.properties The Exchange properties as a json object. 
exchange is the name of the variable and properties is the path to the exchange properties. Available if allowContextMapAll option is true. All the values that cannot be converted to json with Jackson are denied and will not be available in the jslt expression. For example, the header named "type" and the exchange property "instance" can be accessed like { "type": USDheaders.type, "instance": USDexchange.properties.instance } 33.5. Samples The sample example is as given below. from("activemq:My.Queue"). to("jslt:com/acme/MyResponse.json"); And a file based resource: from("activemq:My.Queue"). to("jslt:file://myfolder/MyResponse.json?contentCache=true"). to("activemq:Another.Queue"); You can also specify which JSLT expression the component should use dynamically via a header, so for example: from("direct:in"). setHeader("CamelJsltResourceUri").constant("path/to/my/spec.json"). to("jslt:dummy?allowTemplateFromHeader=true"); Or send whole jslt expression via header: (suitable for querying) from("direct:in"). setHeader("CamelJsltString").constant(".published"). to("jslt:dummy?allowTemplateFromHeader=true"); Passing exchange properties to the jslt expression can be done like this from("direct:in"). to("jslt:com/acme/MyResponse.json?allowContextMapAll=true"); 33.6. Spring Boot Auto-Configuration When using jslt with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency> The component supports 6 options, which are listed below. Name Description Default Type camel.component.jslt.allow-template-from-header Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false Boolean camel.component.jslt.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jslt.enabled Whether to enable auto configuration of the jslt component. This is enabled by default. Boolean camel.component.jslt.functions JSLT can be extended by plugging in functions written in Java. Collection camel.component.jslt.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jslt.object-filter JSLT can be extended by plugging in a custom jslt object filter. The option is a com.schibsted.spt.data.jslt.filters.JsonFilter type. JsonFilter | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jslt</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency>",
"jslt:specName[?options]",
"jslt:resourceUri",
"{ \"type\": USDheaders.type, \"instance\": USDexchange.properties.instance }",
"from(\"activemq:My.Queue\"). to(\"jslt:com/acme/MyResponse.json\");",
"from(\"activemq:My.Queue\"). to(\"jslt:file://myfolder/MyResponse.json?contentCache=true\"). to(\"activemq:Another.Queue\");",
"from(\"direct:in\"). setHeader(\"CamelJsltResourceUri\").constant(\"path/to/my/spec.json\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");",
"from(\"direct:in\"). setHeader(\"CamelJsltString\").constant(\".published\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");",
"from(\"direct:in\"). to(\"jslt:com/acme/MyResponse.json?allowContextMapAll=true\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-jslt-component-starter |
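The JSLT chapter above references specification files such as com/acme/MyResponse.json without showing one. As a minimal, hypothetical sketch (the field names are invented for illustration, not taken from the chapter), a JSLT specification is a JSON-like document in which dot expressions read fields from the incoming JSON message body:
// MyResponse.json: builds the outgoing JSON from fields of the incoming body
{
  "id": .order.id,
  "customer": .order.customer.name,
  "total": .order.price * .order.quantity
}
Placed on the classpath and referenced as jslt:com/acme/MyResponse.json in a route, the message body is replaced with the JSON produced by evaluating these expressions against the incoming body.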
Chapter 8. alarming | Chapter 8. alarming This chapter describes the commands under the alarming command. 8.1. alarming capabilities list List capabilities of alarming service Usage: Table 8.1. Command arguments Value Summary -h, --help Show this help message and exit Table 8.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 8.3. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 8.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 8.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print an empty table if there is no data to show. | [
"openstack alarming capabilities list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/alarming |
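A short usage sketch built only from the options documented above; it assumes authentication is already configured in the environment, for example through the OS_* variables described in the openstack client reference:
# default human-readable table
openstack alarming capabilities list
# machine-readable JSON output without indentation
openstack alarming capabilities list -f json --noindent
# fit the table to the current terminal width
openstack alarming capabilities list --fit-width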
Chapter 14. Using the Stream Control Transmission Protocol (SCTP) | Chapter 14. Using the Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a bare-metal cluster. 14.1. Support for SCTP on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 14.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 14.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 14.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
Procedure Create a pod starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp | [
"apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP",
"apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp",
"oc create -f load-sctp-module.yaml",
"oc get nodes",
"apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP",
"oc create -f sctp-server.yaml",
"apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102",
"oc create -f sctp-service.yaml",
"apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]",
"oc apply -f sctp-client.yaml",
"oc rsh sctpserver",
"nc -l 30102 --sctp",
"oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'",
"oc rsh sctpclient",
"nc <cluster_IP> 30102 --sctp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/using-sctp |
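In addition to waiting for oc get nodes to report the Ready status, one way to confirm that the MachineConfig above actually loaded the SCTP module on a worker node is to open a debug shell on that node; <node_name> is a placeholder:
oc debug node/<node_name>
# inside the debug pod, switch to the host root and check the loaded kernel modules
chroot /host
lsmod | grep sctp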
Chapter 5. Ensuring that data displays correctly in the Fuse Console | Chapter 5. Ensuring that data displays correctly in the Fuse Console If the Fuse Console is missing queues or connections, or is displaying inconsistent icons, adjust the Jolokia collection size parameter, which specifies the maximum number of elements in an array that Jolokia marshals in a response. Procedure In the upper right corner of the Fuse Console, click the user icon and then click Preferences . Increase the value of the Maximum collection size option (the default is 50,000). Click Close . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/fuse-console-data-display-all_eap
Using the Streams for Apache Kafka Bridge | Using the Streams for Apache Kafka Bridge Red Hat Streams for Apache Kafka 2.7 Use the Streams for Apache Kafka Bridge to connect with a Kafka cluster | [
"Content-Type: application/vnd.kafka.v2+json",
"{ \"name\": \"my-consumer\", \"format\": \"binary\", 1 # }",
"{ \"records\": [ { \"key\": \"my-key\", \"value\": \"ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ=\" }, ] }",
"curl -X POST http://localhost:8080/topics/my-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\", \"partition\": 2, \"headers\": [ { \"key\": \"key1\", \"value\": \"QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==\" 1 } ] } ] }'",
"Accept: application/vnd.kafka. EMBEDDED-DATA-FORMAT .v2+json",
"Accept: application/vnd.kafka.json.v2+json",
"http.cors.enabled=true http.cors.allowedOrigins=http://my-web-application.io http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH",
"Origin: http://my-web-application.io",
"curl -v -X GET HTTP-BRIDGE-ADDRESS /consumers/my-group/instances/my-consumer/records -H 'Origin: http://my-web-application.io' -H 'content-type: application/vnd.kafka.v2+json'",
"HTTP/1.1 200 OK Access-Control-Allow-Origin: * 1",
"OPTIONS /my-group/instances/my-consumer/subscription HTTP/1.1 Origin: http://my-web-application.io Access-Control-Request-Method: POST 1 Access-Control-Request-Headers: Content-Type 2",
"curl -v -X OPTIONS -H 'Origin: http://my-web-application.io' -H 'Access-Control-Request-Method: POST' -H 'content-type: application/vnd.kafka.v2+json'",
"HTTP/1.1 200 OK Access-Control-Allow-Origin: http://my-web-application.io Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS,PATCH Access-Control-Allow-Headers: content-type",
"curl -v -X POST HTTP-BRIDGE-ADDRESS /topics/bridge-topic -H 'Origin: http://my-web-application.io' -H 'content-type: application/vnd.kafka.v2+json'",
"HTTP/1.1 200 OK Access-Control-Allow-Origin: http://my-web-application.io",
"logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN",
"logger. <operation_id> .name = http.openapi.operation. <operation_id> logger. <operation_id>_level = _<LOG_LEVEL>",
"http.host=0.0.0.0 http.port=8080",
"./bin/kafka_bridge_run.sh --config-file= <path> /application.properties",
"HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic bridge-quickstart-topic --partitions 3 --replication-factor 1",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic bridge-quickstart-topic",
"curl -X POST http://localhost:8080/topics/bridge-quickstart-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" }, { \"value\": \"sales-lead-0002\", \"partition\": 2 }, { \"value\": \"sales-lead-0003\" } ] }'",
"# { \"offsets\":[ { \"partition\":0, \"offset\":0 }, { \"partition\":2, \"offset\":0 }, { \"partition\":0, \"offset\":1 } ] }",
"curl -X GET http://localhost:8080/topics",
"[ \"__strimzi_store_topic\", \"__strimzi-topic-operator-kstreams-topic-store-changelog\", \"bridge-quickstart-topic\", \"my-topic\" ]",
"curl -X GET http://localhost:8080/topics/bridge-quickstart-topic",
"{ \"name\": \"bridge-quickstart-topic\", \"configs\": { \"compression.type\": \"producer\", \"leader.replication.throttled.replicas\": \"\", \"min.insync.replicas\": \"1\", \"message.downconversion.enable\": \"true\", \"segment.jitter.ms\": \"0\", \"cleanup.policy\": \"delete\", \"flush.ms\": \"9223372036854775807\", \"follower.replication.throttled.replicas\": \"\", \"segment.bytes\": \"1073741824\", \"retention.ms\": \"604800000\", \"flush.messages\": \"9223372036854775807\", \"message.format.version\": \"2.8-IV1\", \"max.compaction.lag.ms\": \"9223372036854775807\", \"file.delete.delay.ms\": \"60000\", \"max.message.bytes\": \"1048588\", \"min.compaction.lag.ms\": \"0\", \"message.timestamp.type\": \"CreateTime\", \"preallocate\": \"false\", \"index.interval.bytes\": \"4096\", \"min.cleanable.dirty.ratio\": \"0.5\", \"unclean.leader.election.enable\": \"false\", \"retention.bytes\": \"-1\", \"delete.retention.ms\": \"86400000\", \"segment.ms\": \"604800000\", \"message.timestamp.difference.max.ms\": \"9223372036854775807\", \"segment.index.bytes\": \"10485760\" }, \"partitions\": [ { \"partition\": 0, \"leader\": 0, \"replicas\": [ { \"broker\": 0, \"leader\": true, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 1, \"leader\": 2, \"replicas\": [ { \"broker\": 2, \"leader\": true, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 2, \"leader\": 1, \"replicas\": [ { \"broker\": 1, \"leader\": true, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true } ] } ] }",
"curl -X GET http://localhost:8080/topics/bridge-quickstart-topic/partitions",
"[ { \"partition\": 0, \"leader\": 0, \"replicas\": [ { \"broker\": 0, \"leader\": true, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 1, \"leader\": 2, \"replicas\": [ { \"broker\": 2, \"leader\": true, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 2, \"leader\": 1, \"replicas\": [ { \"broker\": 1, \"leader\": true, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true } ] } ]",
"curl -X GET http://localhost:8080/topics/bridge-quickstart-topic/partitions/0",
"{ \"partition\": 0, \"leader\": 0, \"replicas\": [ { \"broker\": 0, \"leader\": true, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true } ] }",
"curl -X GET http://localhost:8080/topics/bridge-quickstart-topic/partitions/0/offsets",
"{ \"beginning_offset\": 0, \"end_offset\": 1 }",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"name\": \"bridge-quickstart-consumer\", \"auto.offset.reset\": \"earliest\", \"format\": \"json\", \"enable.auto.commit\": false, \"fetch.min.bytes\": 512, \"consumer.request.timeout.ms\": 30000 }'",
"# { \"instance_id\": \"bridge-quickstart-consumer\", \"base_uri\":\"http:// <bridge_id> -bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer\" }",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"topics\": [ \"bridge-quickstart-topic\" ] }'",
"curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'",
"HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json # [ { \"topic\":\"bridge-quickstart-topic\", \"key\":\"my-key\", \"value\":\"sales-lead-0001\", \"partition\":0, \"offset\":0 }, { \"topic\":\"bridge-quickstart-topic\", \"key\":null, \"value\":\"sales-lead-0003\", \"partition\":0, \"offset\":1 }, #",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"offsets\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0, \"offset\": 2 } ] }'",
"curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'",
"curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"partitions\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0 } ] }'",
"curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer",
"bridge.id=my-bridge http.host=0.0.0.0 http.port=8080 1 http.cors.enabled=true 2 http.cors.allowedOrigins=https://strimzi.io 3 http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH 4",
"KAFKA_BRIDGE_METRICS_ENABLED=true",
"./bin/kafka_bridge_run.sh --config-file=<path>/application.properties",
"bridge.tracing=opentelemetry 1",
"OTEL_SERVICE_NAME=my-tracing-service 1 OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 2",
"./bin/kafka_bridge_run.sh --config-file= <path> /application.properties",
"OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2",
"{ \"bridge_version\" : \"0.16.0\" }",
"{ \"name\" : \"consumer1\", \"format\" : \"binary\", \"auto.offset.reset\" : \"earliest\", \"enable.auto.commit\" : false, \"fetch.min.bytes\" : 512, \"consumer.request.timeout.ms\" : 30000, \"isolation.level\" : \"read_committed\" }",
"{ \"instance_id\" : \"consumer1\", \"base_uri\" : \"http://localhost:8080/consumers/my-group/instances/consumer1\" }",
"{ \"error_code\" : 409, \"message\" : \"A consumer instance with the specified name already exists in the Kafka Bridge.\" }",
"{ \"error_code\" : 422, \"message\" : \"One or more consumer configuration options have invalid values.\" }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"partitions\" : [ { \"topic\" : \"topic\", \"partition\" : 0 }, { \"topic\" : \"topic\", \"partition\" : 1 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 409, \"message\" : \"Subscriptions to topics, partitions, and patterns are mutually exclusive.\" }",
"{ \"offsets\" : [ { \"topic\" : \"topic\", \"partition\" : 0, \"offset\" : 15 }, { \"topic\" : \"topic\", \"partition\" : 1, \"offset\" : 42 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"offsets\" : [ { \"topic\" : \"topic\", \"partition\" : 0, \"offset\" : 15 }, { \"topic\" : \"topic\", \"partition\" : 1, \"offset\" : 42 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"partitions\" : [ { \"topic\" : \"topic\", \"partition\" : 0 }, { \"topic\" : \"topic\", \"partition\" : 1 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"partitions\" : [ { \"topic\" : \"topic\", \"partition\" : 0 }, { \"topic\" : \"topic\", \"partition\" : 1 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"[ { \"topic\" : \"topic\", \"key\" : \"key1\", \"value\" : { \"foo\" : \"bar\" }, \"partition\" : 0, \"offset\" : 2 }, { \"topic\" : \"topic\", \"key\" : \"key2\", \"value\" : [ \"foo2\", \"bar2\" ], \"partition\" : 1, \"offset\" : 3 } ]",
"[ { \"topic\": \"test\", \"key\": \"a2V5\", \"value\": \"Y29uZmx1ZW50\", \"partition\": 1, \"offset\": 100, }, { \"topic\": \"test\", \"key\": \"a2V5\", \"value\": \"a2Fma2E=\", \"partition\": 2, \"offset\": 101, } ]",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 406, \"message\" : \"The `format` used in the consumer creation request does not match the embedded format in the Accept header of this request.\" }",
"{ \"error_code\" : 422, \"message\" : \"Response exceeds the maximum number of bytes the consumer can receive\" }",
"{ \"topics\" : [ \"topic1\", \"topic2\" ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 409, \"message\" : \"Subscriptions to topics, partitions, and patterns are mutually exclusive.\" }",
"{ \"error_code\" : 422, \"message\" : \"A list (of Topics type) or a topic_pattern must be specified.\" }",
"{ \"topics\" : [ \"my-topic1\", \"my-topic2\" ], \"partitions\" : [ { \"my-topic1\" : [ 1, 2, 3 ] }, { \"my-topic2\" : [ 1 ] } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"[ \"topic1\", \"topic2\" ]",
"{ \"records\" : [ { \"key\" : \"key1\", \"value\" : \"value1\" }, { \"value\" : \"value2\", \"partition\" : 1 }, { \"value\" : \"value3\" } ] }",
"{ \"offsets\" : [ { \"partition\" : 2, \"offset\" : 0 }, { \"partition\" : 1, \"offset\" : 1 }, { \"partition\" : 2, \"offset\" : 2 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic was not found.\" }",
"{ \"error_code\" : 422, \"message\" : \"The record list contains invalid records.\" }",
"{ \"name\" : \"topic\", \"offset\" : 2, \"configs\" : { \"cleanup.policy\" : \"compact\" }, \"partitions\" : [ { \"partition\" : 1, \"leader\" : 1, \"replicas\" : [ { \"broker\" : 1, \"leader\" : true, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : false, \"in_sync\" : true } ] }, { \"partition\" : 2, \"leader\" : 2, \"replicas\" : [ { \"broker\" : 1, \"leader\" : false, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : true, \"in_sync\" : true } ] } ] }",
"[ { \"partition\" : 1, \"leader\" : 1, \"replicas\" : [ { \"broker\" : 1, \"leader\" : true, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : false, \"in_sync\" : true } ] }, { \"partition\" : 2, \"leader\" : 2, \"replicas\" : [ { \"broker\" : 1, \"leader\" : false, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : true, \"in_sync\" : true } ] } ]",
"{ \"error_code\" : 404, \"message\" : \"The specified topic was not found.\" }",
"{ \"records\" : [ { \"key\" : \"key1\", \"value\" : \"value1\" }, { \"value\" : \"value2\" } ] }",
"{ \"offsets\" : [ { \"partition\" : 2, \"offset\" : 0 }, { \"partition\" : 1, \"offset\" : 1 }, { \"partition\" : 2, \"offset\" : 2 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic partition was not found.\" }",
"{ \"error_code\" : 422, \"message\" : \"The record is not valid.\" }",
"{ \"partition\" : 1, \"leader\" : 1, \"replicas\" : [ { \"broker\" : 1, \"leader\" : true, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : false, \"in_sync\" : true } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic partition was not found.\" }",
"{ \"beginning_offset\" : 10, \"end_offset\" : 50 }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic partition was not found.\" }",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/using_the_streams_for_apache_kafka_bridge/index |
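The consumer calls listed above can be strung together into a quick end-to-end check of a running bridge. The following shell sketch is not part of the original guide: the bridge address http://localhost:8080, the topic bridge-quickstart-topic, and the consumer group and instance names are assumptions taken from the quickstart examples and must exist or be free to use in your environment.

#!/usr/bin/env bash
# Illustrative smoke test for the Kafka Bridge consumer API (host, port, topic, and names are assumptions).
set -euo pipefail
BRIDGE=http://localhost:8080

# Create a consumer instance that reads JSON-formatted messages from the beginning of the topic.
curl -s -X POST "$BRIDGE/consumers/bridge-quickstart-consumer-group" \
  -H 'content-type: application/vnd.kafka.v2+json' \
  -d '{ "name": "bridge-quickstart-consumer", "format": "json", "auto.offset.reset": "earliest" }'

# Subscribe the consumer to the quickstart topic.
curl -s -X POST "$BRIDGE/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription" \
  -H 'content-type: application/vnd.kafka.v2+json' \
  -d '{ "topics": [ "bridge-quickstart-topic" ] }'

# Poll twice: the first request usually only joins the consumer group, the second returns records.
curl -s -X GET "$BRIDGE/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records" \
  -H 'accept: application/vnd.kafka.json.v2+json'
sleep 1
curl -s -X GET "$BRIDGE/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records" \
  -H 'accept: application/vnd.kafka.json.v2+json'

# Remove the consumer instance so it does not linger in the consumer group.
curl -s -X DELETE "$BRIDGE/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer"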
Deploying OpenShift Data Foundation using bare metal infrastructure | Deploying OpenShift Data Foundation using bare metal infrastructure Red Hat OpenShift Data Foundation 4.18 Instructions on deploying OpenShift Data Foundation using local storage on bare metal infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on bare metal infrastructure. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on bare metal. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using a local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS) follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your vault servers. After you have addressed the above, perform the following steps: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on bare metal . 1.1. 
Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. Note Make sure that the devices have a unique by-id device name for each available raw block device. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Compact mode requirements You can install OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. To configure OpenShift Container Platform in compact mode, see the Configuring a three-node cluster section of the Installing guide in OpenShift Container Platform documentation, and Delivering a Three-node Architecture for Edge Deployments . Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on bare metal infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. 
For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on bare metal . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.5. Creating OpenShift Data Foundation cluster on bare metal Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. Ensure that the disk type is SSD, which is the only supported disk type. If you want to use the multi network plug-in (Multus), before deployment you must create network attachment definitions (NADs) that is later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . 
In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use Ceph RBD as the default StorageClass . This avoids having to manually annotate a StorageClass. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. 
Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . 
Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark to it. To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. 2.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.6.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) 2.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 2.6.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. 
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.6.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally the cluster will be self configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network and cluster network for the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm the OSD pods are using correct network In the openshift-storage namespace use one of the OSD pods to verify the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm the OSD pods are using correct network using the command line interface, run the following command (requires the jq utility): Sample output: Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. 
For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. 
Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. 
Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public",
"oc annotate namespace openshift-storage openshift.io/node-selector="
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_using_bare_metal_infrastructure/index |
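The web-console checks in the verification sections above can also be approximated from the command line. The following is a brief sketch rather than part of the documented procedure; it assumes the oc client is logged in with permission to read the openshift-storage namespace, that the storage cluster uses the default name ocs-storagecluster, and that the StorageCluster status reports a phase field.

# Check that the StorageCluster has finished reconciling (the phase is expected to be Ready).
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.status.phase}{"\n"}'

# Confirm that the operator, Ceph, and Multicloud Object Gateway pods are Running or Completed.
oc get pods -n openshift-storage

# Confirm that the OpenShift Data Foundation storage classes listed in section 2.6.4 exist.
oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'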
7.4. Deleting a Template | 7.4. Deleting a Template If you have used a template to create a virtual machine using the thin provisioning storage allocation option, the template cannot be deleted as the virtual machine needs it to continue running. However, cloned virtual machines do not depend on the template they were cloned from and the template can be deleted. Deleting a Template Click Compute Templates and select a template. Click Remove . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/deleting_a_template |
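Templates can also be removed without the Administration Portal. The following is an illustrative sketch using the Red Hat Virtualization REST API and is not part of the procedure above; the Manager FQDN, credentials, CA certificate path, template name, and template ID are all placeholders you must supply, and the same constraint applies: a template that still backs thin-provisioned virtual machines cannot be deleted.

# Look up the template ID by name (XML response; uses the Manager's CA certificate for TLS verification).
curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
  -u admin@internal:password \
  -H 'Accept: application/xml' \
  'https://manager.example.com/ovirt-engine/api/templates?search=name%3Dmy-template'

# Delete the template by its ID.
curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
  -u admin@internal:password \
  -X DELETE \
  'https://manager.example.com/ovirt-engine/api/templates/<template_id>'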
Chapter 9. Scanning pod images with the Container Security Operator | Chapter 9. Scanning pod images with the Container Security Operator The Container Security Operator (CSO) is an addon for the Clair security scanner available on OpenShift Container Platform and other Kubernetes platforms. With the CSO, users can scan container images associated with active pods for known vulnerabilities. Note The CSO does not work without Red Hat Quay and Clair. The Container Security Operator (CSO) includes the following features: Watches containers associated with pods on either specified or all namespaces. Queries the container registry where the containers came from for vulnerability information, provided that an image's registry supports image scanning, such a a Red Hat Quay registry with Clair scanning. Exposes vulnerabilities through the ImageManifestVuln object in the Kubernetes API. Note To see instructions on installing the CSO on Kubernetes, select the Install button from the Container Security OperatorHub.io page. 9.1. Downloading and running the Container Security Operator in OpenShift Container Platform Use the following procedure to download the Container Security Operator (CSO). Note In the following procedure, the CSO is installed in the marketplace-operators namespace. This allows the CSO to be used in all namespaces of your OpenShift Container Platform cluster. Procedure On the OpenShift Container Platform console page, select Operators OperatorHub and search for Container Security Operator . Select the Container Security Operator, then select Install to go to the Create Operator Subscription page. Check the settings (all namespaces and automatic approval strategy, by default), and select Subscribe . The Container Security appears after a few moments on the Installed Operators screen. Optional: you can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then, run the following command to add the certificate to the CSO: USD oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators Note You must restart the Operator pod for the new certificates to take effect. Navigate to Home Overview . A link to Image Vulnerabilities appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a security breakdown, as shown in the following image: Important The Container Security Operator currently provides broken links for Red Hat Security advisories. For example, the following link might be provided: https://access.redhat.com/errata/RHSA-2023:1842%20https://access.redhat.com/security/cve/CVE-2023-23916 . The %20 in the URL represents a space character, however it currently results in the combination of the two URLs into one incomplete URL, for example, https://access.redhat.com/errata/RHSA-2023:1842 and https://access.redhat.com/security/cve/CVE-2023-23916 . As a temporary workaround, you can copy each URL into your browser to navigate to the proper page. This is a known issue and will be fixed in a future version of Red Hat Quay. You can do one of two things at this point to follow up on any detected vulnerabilities: Select the link to the vulnerability. You are taken to the container registry, Red Hat Quay or other registry where the container came from, where you can see information about the vulnerability. 
The following figure shows an example of detected vulnerabilities from a Quay.io registry: Select the namespaces link to go to the Image Manifest Vulnerabilities page, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in two namespaces: After executing this procedure, you are made aware of what images are vulnerable, what you must do to fix those vulnerabilities, and every namespace that the image was run in. Knowing this, you can perform the following actions: Alert users who are running the image that they need to correct the vulnerability. Stop the images from running by deleting the deployment or the object that started the pod that the image is in. Note If you delete the pod, it might take a few minutes for the vulnerability to reset on the dashboard. 9.2. Querying image vulnerabilities from the CLI Use the following procedure to query image vulnerabilities from the command line interface (CLI). Procedure Enter the following command to query for detected vulnerabilities: USD oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s Optional. To display details for a particular vulnerability, identify a specific vulnerability and its namespace, and use the oc describe command. The following example shows an active container whose image includes an RPM package with a vulnerability: USD oc describe vuln --namespace <namespace> sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries... 9.3. Uninstalling the Container Security Operator To uninstall the Container Security Operator from your OpenShift Container Platform deployment, you must uninstall the Operator and delete the imagemanifestvulns.secscan.quay.redhat.com custom resource definition (CRD). Without removing the CRD, image vulnerabilities are still reported on the OpenShift Container Platform Overview page. Procedure On the OpenShift Container Platform web console, click Operators Installed Operators . Click the menu kebab of the Container Security Operator. Click Uninstall Operator . Confirm your decision by clicking Uninstall in the popup window. Remove the imagemanifestvulns.secscan.quay.redhat.com custom resource definition by entering the following command: USD oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com Example output customresourcedefinition.apiextensions.k8s.io "imagemanifestvulns.secscan.quay.redhat.com" deleted | [
"oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators",
"oc get vuln --all-namespaces",
"NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s",
"oc describe vuln --namespace <namespace> sha256.ac50e3752",
"Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries",
"oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com",
"customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_operator_features/container-security-operator-setup |
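The oc get vuln and oc describe vuln commands shown above can also be combined into a quick per-namespace summary. The following is a minimal, hedged sketch rather than part of the official procedure: it relies only on the vuln short name and the standard metadata.namespace field, and assumes a logged-in oc session.
# Summarize how many vulnerable image manifests each namespace currently has.
# Only "oc get vuln", as documented above, is used; other field paths are
# deliberately avoided because they are not shown in this chapter.
for ns in $(oc get vuln --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' | sort -u); do
  count=$(oc get vuln -n "$ns" --no-headers | wc -l)
  echo "namespace ${ns}: ${count} vulnerable image manifest(s)"
done
If a namespace reports a non-zero count, follow the oc describe vuln step above to see the affected packages.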
Release notes for Red Hat build of OpenJDK 17.0.7 | Release notes for Red Hat build of OpenJDK 17.0.7 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.7/index |
9.2. Stable Device Addresses in Red Hat Virtualization | 9.2. Stable Device Addresses in Red Hat Virtualization Virtual hardware PCI address allocations are persisted in the ovirt-engine database. PCI addresses are allocated by QEMU at virtual machine creation time, and reported to VDSM by libvirt . VDSM reports them back to the Manager, where they are stored in the ovirt-engine database. When a virtual machine is started, the Manager sends VDSM the device addresses from the database. VDSM passes them to libvirt , which starts the virtual machine using the PCI device addresses that were allocated when the virtual machine was run for the first time. When a device is removed from a virtual machine, all references to it, including the stable PCI address, are also removed. If a device is added to replace the removed device, it is allocated a PCI address by QEMU , which is unlikely to be the same as that of the device it replaced. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/stable_device_addresses_in_red_hat_enterprise_virtualization
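As an illustration only (not part of this reference), you can inspect the PCI addresses that libvirt allocated to a running guest directly on the virtualization host; the domain name my-vm below is a placeholder.
# List the stable PCI addresses assigned to a running guest's devices.
virsh dumpxml my-vm | grep "address type='pci'"
Each matching line shows the domain, bus, slot, and function values that are persisted in the ovirt-engine database and reused on subsequent starts.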
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/making-open-source-more-inclusive |
Configuring the Red Hat High Availability Add-On with Pacemaker | Configuring the Red Hat High Availability Add-On with Pacemaker Red Hat Enterprise Linux 6 Reference Document for the High Availability Add-On for Red Hat Enterprise Linux 6 Steven Levine Red Hat Customer Content Services [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/index |
Chapter 2. Enabling host communication with Insights | Chapter 2. Enabling host communication with Insights Before you can execute playbooks on remote systems from Red Hat Insights for Red Hat Enterprise Linux, your systems have to be able to communicate with Red Hat Insights. For Red Hat Enterprise Linux systems that are not managed by Red Hat Satellite , you should follow the procedure below to enable the rhc client on those systems. For systems that are managed by Satellite, you will configure Cloud Connector on the host servers for those systems. 2.1. Enabling the rhc client on systems directly managed by Insights To be able to execute remediation playbooks from Insights for Red Hat Enterprise Linux, the rhc client must be enabled on the systems in your infrastructure. The rhc connect command does this by registering systems (RHEL 8.6 and later, and 9.0 and later) with Red Hat Subscription Manager and Red Hat Insights, and enabling remote host configuration (rhc) features in Insights for Red Hat Enterprise Linux. Prerequisites Sudo access on the Red Hat Enterprise Linux host system Connect rhc on RHEL8.5 systems Remote host configuration on RHEL 8.5 has dependencies of ansible and rhc-worker-playbook . To install the dependencies, you must first register with Subscription Manager. Use the following commands to enable rhc on RHEL 8.5 systems. [root]# subscription-manager repos --enable ansible-2.9-for-rhel-8-x86_64-rpms [root]# dnf -y install ansible rhc-worker-playbook-0.1.5-3.el8_4 [root]# rhc connect Connect rhc on RHEL8.6 and later systems Use the following commands to enable rhc on RHEL8.6 and later systems. [root]# dnf -y update rhc [root]# dnf -y install rhc-worker-playbook [root]# rhc connect Connect rhc on RHEL9.0 and later systems Use the following commands to enable rhc on RHEL9.0 and later systems. [root]# dnf -y install rhc rhc-worker-playbook [root]# rhc connect Additional resources After enabling rhc, you can manage the configuration at Red Hat Hybrid Cloud Console > Red Hat Insights > Inventory > System Configuration > Remote Host Configuration (RHC) . For complete rhc documentation, see Remote Host Configuration and Management . 2.2. Enabling Cloud Connector for content hosts managed by Satellite You can remediate issues on Satellite-managed content hosts remotely from the Insights for Red Hat Enterprise Linux user interface in the Red Hat Hybrid Cloud Console. Remote remediation from Insights requires that you first configure the Cloud Connector plugin on the Satellite Server. Important If you want to manage and execute host remediations completely from the Satellite web console, then you do not need to enable the Cloud Connector plugin. The following prerequisites are comprehensive for Satellite Server configuration: Prerequisites Satellite must be version 6.9 or later. You have root access to the Satellite server. The content hosts that are managed by the Satellite should have the Insights client installed and turned on. See the reference section of this documentation for Insights client installation and enablement procedures. Import a Subscription Manifest into Satellite. For more information, see Importing a Subscription Manifest into Satellite Server in the Red Hat Satellite Content Management Guide . Register your hosts to Satellite using an activation key to attach Red Hat subscriptions. For more information, see Registering Hosts in the Red Hat Satellite Managing Hosts guide. 2.2.1.
Configuring Cloud Connector and uploading your Satellite Server content host inventory to Red Hat Insights Before you can run remediation playbooks remotely from Insights, you must install and configure the Cloud Connector plugin on Satellite Server. Perform the following tasks to install, configure, and verify the configuration of Cloud Connector. Procedure On Satellite Server, enable the remote-execution plugin by entering one of the following commands, based on your version of Satellite Server. On Satellite Server 6.12 and newer On Satellite Server 6.9 - 6.11 Note Configuring Cloud Connector requires that the Satellite perform a remote execution on itself. This is why the first step is to enable the remote-execution script or plugin. In the Satellite Server web UI, navigate to Configure > Red Hat Cloud > Inventory Upload . Verify that the Automatic Inventory Upload switch is turned ON , which is the default setting. Optionally: Toggle the Obfuscate host names switch to the ON position to hide host names that Satellite Server reports to the Hybrid Cloud Console. Note The Obfuscate host names setting only affects rh_cloud reports. If you want to obfuscate hostnames and IP addresses, you should set obfuscation in the Insights client configuration. Satellite knows how to read this configuration, and will follow along. See Client Configuration Guide for Red Hat Insights sections, Obfuscating the host name and Obfuscating the IPv4 address . Automatic inventory upload and Obfuscate host names are global settings. They affect content hosts that belong to all organizations. Click Configure Cloud Connector . A Notice dialog box warns you that this action also enables auto reports upload. Click Confirm . Wait for the task to finish. This should take about one minute. Go to Monitor > Jobs > Configure Cloud Connector to see the job. Eventually, you will see the satellite in Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Integrations , in the Red Hat tab. Allow up to one hour after the job is visible in the Satellite web console. The bottom of the Inventory Uploads page shows the name of your organization; hovering over it will turn the area grey. Clicking on the name will cause it to expand, showing a Generating tab and an Uploading tab where you can monitor the progress of the upload. Click Restart to generate a data payload from each of the content hosts that have the Insights client running, and upload your host inventory to Insights for Red Hat Enterprise Linux. Repeat this step, clicking Restart for each organization for which you want to upload a content host inventory. Set Auto sync for the organization under Configure > Red Hat Cloud (after Sat 6.11) > Insights using the toggle in the upper right of the screen. Verification To verify that the upload was successful, log into Red Hat Hybrid Cloud Console > Red Hat Enterprise Linux > Red Hat Insights > Inventory and search for the satellite_id tag for your content hosts. Optionally, push the Sync inventory status button and wait for the task to finish. It will show you the number of content hosts recognized by Insights inventory. | [
"subscription-manager repos --enable ansible-2.9-for-rhel-8-x86_64-rpms dnf -y install ansible rhc-worker-playbook-0.1.5-3.el8_4 rhc connect",
"dnf -y update rhc dnf -y install rhc-worker-playbook rhc connect",
"dnf -y install rhc rhc-worker-playbook rhc connect",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-install-key true",
"satellite-installer --foreman-proxy-plugin-remote-execution-ssh-install-key true"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide/host-communication-with-insights_red-hat-insights-remediation-guide |
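After running rhc connect, a quick way to confirm connectivity is sketched below. These verification commands are assumptions about a typical RHEL 8.6+ or 9.x host and are not part of the procedure above; check your installed rhc and insights-client versions if a subcommand is unavailable.
[root]# rhc status               # reports whether the host is connected to Red Hat, if your rhc version provides this subcommand
[root]# insights-client --status # confirms that the host is registered with Red Hat Insights
If both checks succeed, the host should appear in Red Hat Hybrid Cloud Console > Red Hat Insights > Inventory within a few minutes.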
Chapter 3. Configuring routed spine-leaf in the undercloud | Chapter 3. Configuring routed spine-leaf in the undercloud This section describes a use case about how to configure the undercloud to accommodate routed spine-leaf with composable networks. 3.1. Configuring the spine leaf provisioning networks To configure the provisioning networks for your spine leaf infrastructure, edit the undercloud.conf file and set the relevant parameters included in the following procedure. Procedure Log in to the undercloud as the stack user. If you do not already have an undercloud.conf file, copy the sample template file: Edit the undercloud.conf file. Set the following values in the [DEFAULT] section: Set local_ip to the undercloud IP on leaf0 : Set undercloud_public_host to the externally facing IP address of the undercloud: Set undercloud_admin_host to the administration IP address of the undercloud. This IP address is usually on leaf0: Set local_interface to the interface to bridge for the local network: Set enable_routed_networks to true : Define your list of subnets using the subnets parameter. Define one subnet for each L2 segment in the routed spine and leaf: Specify the subnet associated with the physical L2 segment local to the undercloud using the local_subnet parameter: Set the value of undercloud_nameservers . Tip You can find the current IP addresses of the DNS servers that are used for the undercloud nameserver by looking in /etc/resolv.conf. Create a new section for each subnet that you define in the subnets parameter: Save the undercloud.conf file. Run the undercloud installation command: This configuration creates three subnets on the provisioning network or control plane. The overcloud uses each network to provision systems within each respective leaf. To ensure proper relay of DHCP requests to the undercloud, you might need to configure a DHCP relay. 3.2. Configuring a DHCP relay You run the DHCP relay service on a switch, router, or server that is connected to the remote network segment you want to forward the requests from. Note Do not run the DHCP relay service on the undercloud. The undercloud uses two DHCP servers on the provisioning network: An introspection DHCP server. A provisioning DHCP server. You must configure the DHCP relay to forward DHCP requests to both DHCP servers on the undercloud. You can use UDP broadcast with devices that support it to relay DHCP requests to the L2 network segment where the undercloud provisioning network is connected. Alternatively, you can use UDP unicast, which relays DHCP requests to specific IP addresses. Note Configuration of DHCP relay on specific device types is beyond the scope of this document. As a reference, this document provides a DHCP relay configuration example using the implementation in ISC DHCP software. For more information, see manual page dhcrelay(8). Important DHCP option 79 is required for some relays, particularly relays that serve DHCPv6 addresses, and relays that do not pass on the originating MAC address. For more information, see RFC6939 . Broadcast DHCP relay This method relays DHCP requests using UDP broadcast traffic onto the L2 network segment where the DHCP server or servers reside. All devices on the network segment receive the broadcast traffic. When using UDP broadcast, both DHCP servers on the undercloud receive the relayed DHCP request. 
Depending on the implementation, you can configure this by specifying either the interface or IP network address: Interface Specify an interface that is connected to the L2 network segment where the DHCP requests are relayed. IP network address Specify the network address of the IP network where the DHCP requests are relayed. Unicast DHCP relay This method relays DHCP requests using UDP unicast traffic to specific DHCP servers. When you use UDP unicast, you must configure the device that provides the DHCP relay to relay DHCP requests to both the IP address that is assigned to the interface used for introspection on the undercloud and the IP address of the network namespace that the OpenStack Networking (neutron) service creates to host the DHCP service for the ctlplane network. The interface used for introspection is the one defined as inspection_interface in the undercloud.conf file. If you have not set this parameter, the default interface for the undercloud is br-ctlplane . Note It is common to use the br-ctlplane interface for introspection. The IP address that you define as the local_ip in the undercloud.conf file is on the br-ctlplane interface. The IP address allocated to the Neutron DHCP namespace is the first address available in the IP range that you configure for the local_subnet in the undercloud.conf file. The first address in the IP range is the one that you define as dhcp_start in the configuration. For example, 192.168.10.10 is the IP address if you use the following configuration: Warning The IP address for the DHCP namespace is automatically allocated. In most cases, this address is the first address in the IP range. To verify that this is the case, run the following commands on the undercloud: Example dhcrelay configuration In the following examples, the dhcrelay command in the dhcp package uses the following configuration: Interfaces to relay incoming DHCP request: eth1 , eth2 , and eth3 . Interface the undercloud DHCP servers on the network segment are connected to: eth0 . The DHCP server used for introspection is listening on IP address: 192.168.10.1 . The DHCP server used for provisioning is listening on IP address 192.168.10.10 . This results in the following dhcrelay command: dhcrelay version 4.2.x: dhcrelay version 4.3.x and later: Example Cisco IOS routing switch configuration This example uses the following Cisco IOS configuration to perform the following tasks: Configure a VLAN to use for the provisioning network. Add the IP address of the leaf. Forward UDP and BOOTP requests to the introspection DHCP server that listens on IP address: 192.168.10.1 . Forward UDP and BOOTP requests to the provisioning DHCP server that listens on IP address 192.168.10.10 . Now that you have configured the provisioning network, you can configure the remaining overcloud leaf networks. 3.3. Creating flavors and tagging nodes for leaf networks Each role in each leaf network requires a flavor and role assignment so that you can tag nodes into their respective leaf. Complete the following steps to create and assign each flavor to a role. Procedure Source the stackrc file: Create flavors for each custom role: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. 
Retrieve a list of your nodes to identify their UUIDs: Tag each bare metal node to its leaf network and role by using a custom resource class: Replace <node> with the ID of the bare metal node. For example, enter the following command to tag a node with UUID 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 to the Compute role on Leaf2: Associate each leaf network role flavor with the custom resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal Provisioning service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. In the node-info.yaml file, specify the flavor that you want to use for each custom leaf role, and the number of nodes to allocate for each custom leaf role. For example, the following configuration specifies the flavor to use, and the number of nodes to allocate for the custom leaf roles compute_leaf0 , compute_leaf1 , compute_leaf2 , ceph-storage_leaf0 , ceph-storage_leaf1 , and ceph-storage_leaf2 : 3.4. Mapping bare metal node ports to control plane network segments To enable deployment on a L3 routed network, you must configure the physical_network field on the bare metal ports. Each bare metal port is associated with a bare metal node in the OpenStack Bare Metal (ironic) service. The physical network names are the names that you include in the subnets option in the undercloud configuration. Note The physical network name of the subnet specified as local_subnet in the undercloud.conf file is always named ctlplane . Procedure Source the stackrc file: Check the bare metal nodes: Ensure that the bare metal nodes are either in enroll or manageable state. If the bare metal node is not in one of these states, the command that sets the physical_network property on the baremetal port fails. To set all nodes to manageable state, run the following command: Check which baremetal ports are associated with which baremetal node: Set the physical-network parameter for the ports. In the example below, three subnets are defined in the configuration: leaf0 , leaf1 , and leaf2 . The local_subnet is leaf0 . Because the physical network for the local_subnet is always ctlplane , the baremetal port connected to leaf0 uses ctlplane. The remaining ports use the other leaf names: Introspect the nodes before you deploy the overcloud. Include the --all-manageable and --provide options to set the nodes as available for deployment: 3.5. Adding a new leaf to a spine-leaf provisioning network When increasing network capacity which can include adding new physical sites, you might need to add a new leaf and a corresponding subnet to your Red Hat OpenStack Platform spine-leaf provisioning network. When provisioning a leaf on the overcloud, the corresponding undercloud leaf is used. Prerequisites Your RHOSP deployment uses a spine-leaf network topology. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: In the /home/stack/undercloud.conf file, do the following: Locate the subnets parameter, and add a new subnet for the leaf that you are adding. A subnet represents an L2 segment in the routed spine and leaf: Example In this example, a new subnet ( leaf3 ) is added for the new leaf ( leaf3 ): Create a section for the subnet that you added. Example In this example, the section [leaf3] is added for the new subnet ( leaf3 ): Save the undercloud.conf file. 
Reinstall your undercloud: Additional resources Adding a new leaf to a spine-leaf deployment | [
"[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf",
"local_ip = 192.168.10.1/24",
"undercloud_public_host = 10.1.1.1",
"undercloud_admin_host = 192.168.10.2",
"local_interface = eth1",
"enable_routed_networks = true",
"subnets = leaf0,leaf1,leaf2",
"local_subnet = leaf0",
"undercloud_nameservers = 10.11.5.19,10.11.5.20",
"[leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False [leaf1] cidr = 192.168.11.0/24 dhcp_start = 192.168.11.10 dhcp_end = 192.168.11.90 inspection_iprange = 192.168.11.100,192.168.11.190 gateway = 192.168.11.1 masquerade = False [leaf2] cidr = 192.168.12.0/24 dhcp_start = 192.168.12.10 dhcp_end = 192.168.12.90 inspection_iprange = 192.168.12.100,192.168.12.190 gateway = 192.168.12.1 masquerade = False",
"[stack@director ~]USD openstack undercloud install",
"[DEFAULT] local_subnet = leaf0 subnets = leaf0,leaf1,leaf2 [leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False",
"openstack port list --device-owner network:dhcp -c \"Fixed IP Addresses\" +----------------------------------------------------------------------------+ | Fixed IP Addresses | +----------------------------------------------------------------------------+ | ip_address='192.168.10.10', subnet_id='7526fbe3-f52a-4b39-a828-ec59f4ed12b2' | +----------------------------------------------------------------------------+ openstack subnet show 7526fbe3-f52a-4b39-a828-ec59f4ed12b2 -c name +-------+--------+ | Field | Value | +-------+--------+ | name | leaf0 | +-------+--------+",
"sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 -i eth0 -i eth1 -i eth2 -i eth3",
"sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 -iu eth0 -id eth1 -id eth2 -id eth3",
"interface vlan 2 ip address 192.168.24.254 255.255.255.0 ip helper-address 192.168.10.1 ip helper-address 192.168.10.10 !",
"[stack@director ~]USD source ~/stackrc",
"ROLES=\"control compute_leaf0 compute_leaf1 compute_leaf2 ceph-storage_leaf0 ceph-storage_leaf1 ceph-storage_leaf2\" for ROLE in USDROLES; do openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> USDROLE ; done for ROLE in USDROLES; do openstack flavor set --property \"cpu_arch\"=\"x86_64\" --property \"capabilities:boot_option\"=\"local\" --property resources:DISK_GB='0' --property resources:MEMORY_MB='0' --property resources:VCPU='0' USDROLE ; done",
"(undercloud)USD openstack baremetal node list",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.LEAF-ROLE <node>",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.COMPUTE-LEAF2 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13",
"(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_LEAF_ROLE=1 <custom_role>",
"parameter_defaults: OvercloudControllerFlavor: control OvercloudComputeLeaf0Flavor: compute_leaf0 OvercloudComputeLeaf1Flavor: compute_leaf1 OvercloudComputeLeaf2Flavor: compute_leaf2 OvercloudCephStorageLeaf0Flavor: ceph-storage_leaf0 OvercloudCephStorageLeaf1Flavor: ceph-storage_leaf1 OvercloudCephStorageLeaf2Flavor: ceph-storage_leaf2 ControllerLeaf0Count: 3 ComputeLeaf0Count: 3 ComputeLeaf1Count: 3 ComputeLeaf2Count: 3 CephStorageLeaf0Count: 3 CephStorageLeaf1Count: 3 CephStorageLeaf2Count: 3",
"source ~/stackrc",
"openstack baremetal node list",
"for node in USD(openstack baremetal node list -f value -c Name); do openstack baremetal node manage USDnode --wait; done",
"openstack baremetal port list --node <node-uuid>",
"openstack baremetal port set --physical-network ctlplane <port-uuid> openstack baremetal port set --physical-network leaf1 <port-uuid> openstack baremetal port set --physical-network leaf2 <port-uuid>",
"openstack overcloud node introspect --all-manageable --provide",
"source ~/stackrc",
"subnets = leaf0,leaf1,leaf2,leaf3",
"[leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False [leaf1] cidr = 192.168.11.0/24 dhcp_start = 192.168.11.10 dhcp_end = 192.168.11.90 inspection_iprange = 192.168.11.100,192.168.11.190 gateway = 192.168.11.1 masquerade = False [leaf2] cidr = 192.168.12.0/24 dhcp_start = 192.168.12.10 dhcp_end = 192.168.12.90 inspection_iprange = 192.168.12.100,192.168.12.190 gateway = 192.168.12.1 masquerade = False [leaf3] cidr = 192.168.13.0/24 dhcp_start = 192.168.13.10 dhcp_end = 192.168.13.90 inspection_iprange = 192.168.13.100,192.168.13.190 gateway = 192.168.13.1 masquerade = False",
"openstack undercloud install"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/assembly_configuring-routed-spine-leaf-in-the-undercloud |
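Once openstack undercloud install completes, the following hedged sketch shows one way to confirm that a provisioning subnet exists for each leaf defined in undercloud.conf. The subnet names leaf0, leaf1, and leaf2 come from the examples above; adjust them to match your own subnets parameter.
(undercloud)$ source ~/stackrc
# One subnet per leaf should be listed on the ctlplane provisioning network.
(undercloud)$ openstack subnet list
# Spot-check a single leaf subnet against the values configured in undercloud.conf.
(undercloud)$ openstack subnet show leaf1 -c cidr -c allocation_pools -c gateway_ip
If a leaf subnet is missing or its CIDR does not match undercloud.conf, rerun the undercloud installation after correcting the configuration file.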
Virtual Server Administration | Virtual Server Administration Red Hat Enterprise Linux 4 Linux Virtual Server (LVS) for Red Hat Enterprise Linux Edition 1.0 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/index |
Chapter 3. Creating networks with the director Operator | Chapter 3. Creating networks with the director Operator Use the OpenStackNetConfig resource to create networks and bridges on OpenShift Virtualization worker nodes to connect your virtual machines to these networks. You must create one control plane network for your overcloud and additional networks to implement network isolation for your composable networks. 3.1. Understanding virtual machine bridging with OpenStackNet When you create virtual machines with the OpenStackVMSet resource, you must connect these virtual machines to the relevant Red Hat OpenStack Platform (RHOSP) networks. The OpenStackNetConfig resource includes an attachConfigurations option which is a hash of nodeNetworkConfigurationPolicy . Each specified attachConfiguration in the OpenStackNetConfig creates an OpenStackNet Attachment, which passes network interface data to the NodeNetworkConfigurationPolicy resource in OpenShift. The NodeNetworkConfigurationPolicy resource uses the nmstate API to configure the end state of the network configuration on each OCP worker node. Each network, configured in the OpenStackNetConfig, references one of the attachConfigurations . Inside the virtual machines, there is one interface per network. Through this method, you can create required bridges on OCP worker nodes and connect your Controller virtual machines to RHOSP networks. For example, if you create a br-osp attachConfiguration and set the nodeNetworkConfigurationPolicy option to create a Linux bridge and connect the bridge to a NIC on each worker, the NodeNetworkConfigurationPolicy resource configures each OCP worker node to match this desired end state: After you apply this configuration, each worker contains a new bridge named br-osp , which is connected to the enp6s0 NIC on each host. Dedicated NICs are required to deploy RHOSP. All RHOSP Controller virtual machines can connect to the br-osp bridge for control plane network traffic. If you specify an Internal API network through VLAN 20, you can set the attachConfiguration option to modify the networking configuration on each OCP worker node and connect the VLAN to the existing br-osp bridge: The br-osp already exists and is connected to the enp6s0 NIC on each host, so no change occurs to the bridge itself. However, the InternalAPI OpenStackNet associates VLAN 20 to this network, which means RHOSP Controller virtual machines can connect to the VLAN 20 on the br-osp bridge for Internal API network traffic. When you create virtual machines with the OpenStackVMSet resource, the virtual machines use multiple Virtio devices connected to each network. OpenShift Virtualization sorts the network names in alphabetical order except for the default network, which is always the first interface. For example, if you create the default RHOSP networks with OpenStackNetConfig, the interface configuration for Controller virtual machines resembles the following example: This configuration results in the following network-to-interface mapping for Controller nodes: Table 3.1. Default network-to-interface mapping Network Interface default nic1 ctlplane nic2 external nic3 internalapi nic4 storage nic5 storagemgmt nic6 tenant nic7 Note The role NIC template used by OpenStackVMSet is auto generated. It can be overwritten by adding a nic-template.role.j2 file to your tarball file. Include the binary contents of the tarball file in an OpenShift ConfigMap names tripleo-tarball-config . 
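For example, a hedged sketch of overriding the auto-generated role NIC template as mentioned in the note above might look like the following. Only the ConfigMap name tripleo-tarball-config comes from the text; the tarball file name and the assumption that oc create configmap accepts the archive with --from-file are illustrative.
# Package the custom role NIC template and publish it as the tarball ConfigMap.
tar -czf custom-config.tar.gz nic-template.role.j2
oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
If the ConfigMap already exists, delete and recreate it, or apply an updated manifest, so that the director Operator picks up the new template on the next reconciliation.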
Additional resources "Updating node network configuration" 3.2. Creating an overcloud control plane network with OpenStackNetConfig You must define at least one control plane network for your overcloud in OpenStackNetConfig. In addition to IP address assignment, the network definition includes the mapping information for OpenStackNetAttachment. OpenShift Virtualization uses this information to attach any virtual machines to the network. Prerequisites Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. Ensure that you have installed the oc command line tool on your workstation. Procedure Create a file named osnetconfig.yaml on your workstation. Include the resource specification for the control plane network, which is named ctlplane . For example, the specification for a control plane that uses a Linux bridge connected to the enp6s0 Ethernet device on each worker node is as follows: Set the following values in the networks specification: name Set to the name of the control plane network, which is Control. nameLower Set to the lowercase name of the control plane network, which is ctlplane. subnets Set the subnet specifications. subnets.name Set the name of the control plane subnet, which is ctlplane. subnets.attachConfiguration Set the reference to the attach configuration that should be used. subnets.ipv4 Details of the IPv4 subnet, with allocationStart, allocationEnd, cidr, gateway, and an optional list of routes (with destination and nexthop). For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknetconfig CRD: Save the file when you have finished configuring the network specification. Create the control plane network: Verification View the resource for the control plane network: 3.3. Creating VLAN networks for network isolation with OpenStackNetConfig You must create additional networks to implement network isolation for your composable networks. To accomplish this network isolation, you can place your composable networks on individual VLAN networks. In addition to IP address assignment, the OpenStackNetConfig resource includes information to define the network configuration policy that OpenShift Virtualization uses to attach any virtual machines to VLAN networks. To use the default Red Hat OpenStack Platform networks, you must create an OpenStackNetConfig resource that defines each network. Table 3.2. Default Red Hat OpenStack Platform networks Network VLAN CIDR Allocation External 10 10.0.0.0/24 10.0.0.10 - 10.0.0.250 InternalApi 20 172.17.0.0/24 172.17.0.10 - 172.17.0.250 Storage 30 172.18.0.0/24 172.18.0.10 - 172.18.0.250 StorageMgmt 40 172.19.0.0/24 172.19.0.10 - 172.19.0.250 Tenant 50 172.20.0.0/24 172.20.0.10 - 172.20.0.250 Important To use different networking details for each network, you must create a custom network_data.yaml file. Prerequisites Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. Ensure that you have installed the oc command line tool on your workstation. Procedure Create a file for your network configuration. Include the resource specification for the VLAN network.
For example, the specification for internal API, storage, storage mgmt, tenant, and external network that manages VLAN-tagged traffic over Linux bridges br-ex and br-osp connected to the enp6s0 and enp7s0 Ethernet device on each worker node is as follows: When you use VLAN for network isolation with linux-bridge the following happens: The director Operator creates a Node Network Configuration Policy for the bridge interface specified in the resource, which uses nmstate to configure the bridge on worker nodes. The director Operator creates a Network Attach Definition for each network, which defines the Multus CNI plugin configuration. When you specify the VLAN ID on the Network Attach Definition, the Multus CNI plugin enables vlan-filtering on the bridge. The director Operator attaches a dedicated interface for each network on a virtual machine. This means that the network template for the OpenStackVMSet is a multi-NIC network template. Set the following values in the resource specification: metadata.name Set to the name of the OpenStackNetConfig. spec Set the network configuration for attaching the networks and the network specifics. For descriptions of the values you can use in this section, view the specification schema in the custom resource definition for the openstacknetconfig CRD: Save the file when you have finished configuring the network specification. Create the network configuration: Verification View the OpenStackNetConfig API and created child resources: If you see errors, check the underlying network-attach-definition and node network configuration policies: 3.4. Configuring jumbo frames with OpenStackNetConfig To use Jumbo Frames for a bridge, you can create a configuration for the device to configure the correct MTU: 3.5. Static IP reservation with OpenStackNetConfig You can use the OpenStackNetConfig specification reservations parameter to reserve a static IP address per host and network. The reservations provided there are populated down to the` OpenStackNet` specifications reservations and have precedence over any auto generated IPs. The following example shows an overcloud with 3 Controllers and 2 Compute nodes, all nodes have static reservations except controller-2 and compute-1 : | [
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackNetConfig metadata: name: openstacknetconfig spec: attachConfigurations: br-osp: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp6s0 description: Linux bridge with enp6s0 as a port name: br-osp state: up type: linux-bridge mtu: 1500 ... networks: - name: Control nameLower: ctlplane subnets: - name: ctlplane ipv4: allocationEnd: 192.168.25.250 allocationStart: 192.168.25.100 cidr: 192.168.25.0/24 gateway: 192.168.25.1 attachConfiguration: br-osp",
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackNetConfig metadata: name: openstacknetconfig spec: attachConfigurations: br-osp: ... networks: ... - isControlPlane: false mtu: 1500 name: InternalApi nameLower: internal_api subnets: - attachConfiguration: br-osp ipv4: allocationEnd: 172.17.0.250 allocationStart: 172.17.0.10 cidr: 172.17.0.0/24 gateway: 172.17.0.1 routes: - destination: 172.17.1.0/24 nexthop: 172.17.0.1 - destination: 172.17.2.0/24 nexthop: 172.17.0.1 name: internal_api vlan: 20",
"interfaces: - masquerade: {} model: virtio name: default - bridge: {} model: virtio name: ctlplane - bridge: {} model: virtio name: external - bridge: {} model: virtio name: internalapi - bridge: {} model: virtio name: storage - bridge: {} model: virtio name: storagemgmt - bridge: {} model: virtio name: tenant",
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackNetConfig metadata: name: openstacknetconfig spec: attachConfigurations: br-osp: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp6s0 description: Linux bridge with enp6s0 as a port name: br-osp state: up type: linux-bridge mtu: 1500 # optional DnsServers list dnsServers: - 192.168.25.1 # optional DnsSearchDomains list dnsSearchDomains: - osptest.test.metalkube.org - some.other.domain # DomainName of the OSP environment domainName: osptest.test.metalkube.org networks: - name: Control nameLower: ctlplane subnets: - name: ctlplane ipv4: allocationEnd: 172.22.0.250 allocationStart: 172.22.0.100 cidr: 172.22.0.0/24 gateway: 172.22.0.1 attachConfiguration: br-osp # optional: configure static mapping for the networks per nodes. If there is none, a random gets created reservations: controller-0: ipReservations: ctlplane: 172.22.0.120 compute-0: ipReservations: ctlplane: 172.22.0.140",
"oc describe crd openstacknetconfig",
"oc create -f osnetconfig.yaml -n openstack",
"oc get openstacknetconfig/openstacknetconfig",
"kind: OpenStackNetConfig metadata: name: openstacknetconfig spec: attachConfigurations: br-osp: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp7s0 description: Linux bridge with enp7s0 as a port name: br-osp state: up type: linux-bridge mtu: 1500 br-ex: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp6s0 description: Linux bridge with enp6s0 as a port name: br-ex state: up type: linux-bridge mtu: 1500 # optional DnsServers list dnsServers: - 172.22.0.1 # optional DnsSearchDomains list dnsSearchDomains: - osptest.test.metalkube.org - some.other.domain # DomainName of the OSP environment domainName: osptest.test.metalkube.org networks: - name: Control nameLower: ctlplane subnets: - name: ctlplane ipv4: allocationEnd: 172.22.0.250 allocationStart: 172.22.0.10 cidr: 172.22.0.0/24 gateway: 172.22.0.1 attachConfiguration: br-osp - name: InternalApi nameLower: internal_api mtu: 1350 subnets: - name: internal_api attachConfiguration: br-osp vlan: 20 ipv4: allocationEnd: 172.17.0.250 allocationStart: 172.17.0.10 cidr: 172.17.0.0/24 - name: External nameLower: external subnets: - name: external ipv4: allocationEnd: 10.0.0.250 allocationStart: 10.0.0.10 cidr: 10.0.0.0/24 gateway: 10.0.0.1 attachConfiguration: br-ex - name: Storage nameLower: storage mtu: 1500 subnets: - name: storage ipv4: allocationEnd: 172.18.0.250 allocationStart: 172.18.0.10 cidr: 172.18.0.0/24 vlan: 30 attachConfiguration: br-osp - name: StorageMgmt nameLower: storage_mgmt mtu: 1500 subnets: - name: storage_mgmt ipv4: allocationEnd: 172.19.0.250 allocationStart: 172.19.0.10 cidr: 172.19.0.0/24 vlan: 40 attachConfiguration: br-osp - name: Tenant nameLower: tenant vip: False mtu: 1500 subnets: - name: tenant ipv4: allocationEnd: 172.20.0.250 allocationStart: 172.20.0.10 cidr: 172.20.0.0/24 vlan: 50 attachConfiguration: br-osp",
"oc describe crd openstacknetconfig",
"oc apply -f openstacknetconfig.yaml -n openstack",
"oc get openstacknetconfig/openstacknetconfig -n openstack oc get openstacknetattachment -n openstack oc get openstacknet -n openstack",
"oc get network-attachment-definitions -n openstack oc get nncp",
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackNetConfig metadata: name: openstacknetconfig spec: attachConfigurations: br-osp: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp7s0 description: Linux bridge with enp7s0 as a port name: br-osp state: up type: linux-bridge mtu: 9000 - name: enp7s0 description: Configuring enp7s0 on workers type: ethernet state: up mtu: 9000",
"spec: ... reservations: compute-0: ipReservations: ctlplane: 172.22.0.140 internal_api: 172.17.0.40 storage: 172.18.0.40 tenant: 172.20.0.40 macReservations: {} controller-0: ipReservations: ctlplane: 172.22.0.120 external: 10.0.0.20 internal_api: 172.17.0.20 storage: 172.18.0.20 storage_mgmt: 172.19.0.20 tenant: 172.20.0.20 macReservations: {} controller-1: ipReservations: ctlplane: 172.22.0.130 external: 10.0.0.30 internal_api: 172.17.0.30 storage: 172.18.0.30 storage_mgmt: 172.19.0.30 tenant: 172.20.0.30 macReservations: {} controlplane: ipReservations: ctlplane: 172.22.0.110 external: 10.0.0.10 internal_api: 172.17.0.10 storage: 172.18.0.10 storage_mgmt: 172.19.0.10 macReservations: {} openstackclient-0: ipReservations: ctlplane: 172.22.0.251 external: 10.0.0.251 internal_api: 172.17.0.251 macReservations: {}"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/assembly_creating-networks-with-the-director-operator_rhosp-director-operator |
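After the OpenStackNetConfig is applied, the following hedged commands show one way to confirm which IP reservations each generated OpenStackNet recorded. The loop avoids guessing resource names, and the assumption that reservations appear in the resource output is based on the specification excerpts above rather than a documented field path.
# List the generated per-network resources, then look for recorded reservations.
oc get openstacknet -n openstack
for net in $(oc get openstacknet -n openstack -o name); do
  oc get "$net" -n openstack -o yaml | grep -A6 -i reservations
done
Static reservations defined in the OpenStackNetConfig should appear here unchanged; any other addresses were auto-assigned from the allocation ranges.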
Chapter 7. Configuring your Logging deployment | Chapter 7. Configuring your Logging deployment 7.1. About the Cluster Logging custom resource To configure the logging subsystem for Red Hat OpenShift, you customize the ClusterLogging custom resource (CR). 7.1.1. About the ClusterLogging custom resource To make changes to your logging subsystem environment, create and modify the ClusterLogging custom resource (CR). Instructions for creating or modifying a CR are provided in this documentation as appropriate. The following example shows a typical custom resource for the logging subsystem. Sample ClusterLogging custom resource (CR) apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" 1 namespace: "openshift-logging" 2 spec: managementState: "Managed" 3 logStore: type: "elasticsearch" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: 5 type: "kibana" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: "fluentd" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi 1 The CR name must be instance . 2 The CR must be installed to the openshift-logging namespace. 3 The Red Hat OpenShift Logging Operator management state. When set to unmanaged , the Operator is in an unsupported state and does not receive updates. 4 Settings for the log store, including retention policy, the number of nodes, the resource requests and limits, and the storage class. 5 Settings for the visualizer, including the resource requests and limits, and the number of pod replicas. 6 Settings for the log collector, including the resource requests and limits. 7.2. Configuring the logging collector The logging subsystem for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes . All supported modifications to the log collector can be performed through the spec.collection.log.fluentd stanza in the ClusterLogging custom resource (CR). 7.2.1. About unsupported configurations The supported way of configuring the logging subsystem for Red Hat OpenShift is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . 7.2.2.
Viewing logging collector pods You can view the Fluentd logging collector pods and the corresponding nodes that they are running on. The Fluentd logging collector pods run only in the openshift-logging project. Procedure Run the following command in the openshift-logging project to view the Fluentd logging collector pods and their details: USD oc get pods --selector component=collector -o wide -n openshift-logging Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none> 7.2.3. Configure log collector CPU and memory limits The log collector allows for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi 1 Specify the CPU and memory limits and requests as needed. The values shown are the default values. 7.2.4. Advanced configuration for the log forwarder The logging subsystem for Red Hat OpenShift includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors: Chunk and chunk buffer sizes Chunk flushing behavior Chunk forwarding retry behavior Fluentd collects log data in a single blob called a chunk . When Fluentd creates a chunk, the chunk is considered to be in the stage , where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue , where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured. By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. These parameters can help you determine the trade-offs between latency and throughput. To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries. You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd. 
Note These parameters are: Not relevant to most users. The default settings should give good general performance. Only for advanced users with detailed knowledge of Fluentd configuration and performance. Only for performance tuning. They have no effect on functional aspects of logging. Table 7.1. Advanced Fluentd Configuration Parameters Parameter Description Default chunkLimitSize The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. 8m totalLimitSize The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. 8G flushInterval The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days). 1s flushMode The method to perform flushes: lazy : Flush chunks based on the timekey parameter. You cannot modify the timekey parameter. interval : Flush chunks based on the flushInterval parameter. immediate : Flush chunks immediately after data is added to a chunk. interval flushThreadCount The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. 2 overflowAction The chunking behavior when the queue is full: throw_exception : Raise an exception to show in the log. block : Stop data chunking until the full buffer issue is resolved. drop_oldest_chunk : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. block retryMaxInterval The maximum time in seconds for the exponential_backoff retry method. 300s retryType The retry method when flushing fails: exponential_backoff : Increase the time between flush retries. Fluentd doubles the time it waits until the retry until the retry_max_interval parameter is reached. periodic : Retries flushes periodically, based on the retryWait parameter. exponential_backoff retryTimeOut The maximum time interval to attempt retries before the record is discarded. 60m retryWait The time in seconds before the chunk flush. 1s For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Add or modify any of the following parameters: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: "300s" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9 ... 1 Specify the maximum size of each chunk before it is queued for flushing. 2 Specify the interval between chunk flushes. 3 Specify the method to perform chunk flushes: lazy , interval , or immediate . 4 Specify the number of threads to use for chunk flushes. 5 Specify the chunking behavior when the queue is full: throw_exception , block , or drop_oldest_chunk . 6 Specify the maximum interval in seconds for the exponential_backoff chunk flushing method. 7 Specify the retry type when chunk flushing fails: exponential_backoff or periodic . 8 Specify the time in seconds before the chunk flush. 9 Specify the maximum size of the chunk buffer. 
Verify that the Fluentd pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Check that the new values are in the fluentd config map: USD oc extract configmap/fluentd --confirm Example fluentd.conf <buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer> 7.2.5. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: logs: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Additional resources Forwarding logs to third-party systems 7.3. Configuring the log store Logging subsystem for Red Hat OpenShift uses Elasticsearch 6 (ES) to store and organize the log data. You can make modifications to your log store, including: storage for your Elasticsearch cluster shard replication across data nodes in the cluster, from full replication to no replication external access to Elasticsearch data Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16G of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64G for each Elasticsearch node. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments. 7.3.1. 
Forwarding audit logs to the log store By default, OpenShift Logging does not store audit logs in the internal OpenShift Container Platform Elasticsearch log store. You can send audit logs to this log store so, for example, you can view them in Kibana. To send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API. Important The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. Verify that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. The logging subsystem for Red Hat OpenShift does not comply with those regulations. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources For more information on the Log Forwarding API, see Forwarding logs using the Log Forwarding API . 7.3.2. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. 
The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The logging subsystem for Red Hat OpenShift and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 7.3.3. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. 
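If you suspect that the Elasticsearch proxy container is being OOM-killed, you can check the last termination reason recorded for it. The following is a minimal sketch and not part of the documented procedure; the component=elasticsearch label selector and the container name proxy are assumptions about a typical logging deployment, so adjust them to match your pods. USD oc get pods -n openshift-logging -l component=elasticsearch -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[?(@.name=="proxy")].lastState.terminated.reason}{"\n"}{end}' Pods whose proxy container was recently OOM-killed report OOMKilled in the second column.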
For production use, you should have no less than the default 16Gi allocated to each pod. Preferably, you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 7.3.4. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit clusterlogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy when using 5 or more nodes. You cannot apply this policy to deployments with a single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy.
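After you save the CR, you can confirm the replica count that Elasticsearch applies to its indices. The following is a minimal verification sketch rather than part of the documented procedure; it reuses the es_util helper shown elsewhere in this section, and it assumes you substitute the name of any Elasticsearch pod in the cluster for the placeholder. The rep column of the output shows the number of replica shards per index. USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -n openshift-logging -- es_util --query="_cat/indices?v" For example, with SingleRedundancy the rep column reports 1 for the log indices.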
Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 7.3.5. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 7.3.6. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. 7.3.7. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 7.3.8. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs require a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here. For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}' 7.3.9. Exposing the log store service as a route By default, the log store that is deployed with the logging subsystem for Red Hat OpenShift is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route, your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . 
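Taken together, the pieces listed above form a request similar to the following sketch. This assumes that you have already created the elasticsearch reencrypt route as described in the procedure later in this section; the individual commands for obtaining the token and the route host are shown there, and _cat/health is only one example of an Elasticsearch API request. USD token=USD(oc whoami -t) USD routeES=`oc get route elasticsearch -n openshift-logging -o jsonpath={.spec.host}` USD curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}/_cat/health?v"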
Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write it to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes. Run the following command to add the log store CA certificate to the route YAML you created in the previous step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable. USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 7.4. Configuring the log visualizer OpenShift Container Platform uses Kibana to display the log data collected by the logging subsystem. You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. 7.4.1. Configuring CPU and memory limits The logging subsystem components allow for adjustments to both the CPU and memory limits.
Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 7.4.2. Scaling redundancy for the log visualizer nodes You can scale the pod that hosts the log visualizer for redundancy. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: visualization: type: "kibana" kibana: replicas: 1 1 1 Specify the number of Kibana nodes. 7.5. Configuring logging subsystem storage Elasticsearch is a memory-intensive application. The default logging subsystem installation deploys 16G of memory for both memory requests and memory limits. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments. 7.5.1. Storage considerations for the logging subsystem for Red Hat OpenShift A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/ to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. 
Although the alerts use the same default values, you cannot change these values in the alerts. 7.5.2. Additional resources Configuring persistent storage for the log store 7.6. Configuring CPU and memory limits for logging subsystem components You can configure both the CPU and memory limits for each of the logging subsystem components as needed. 7.6.1. Configuring CPU and memory limits The logging subsystem components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 7.7. Using tolerations to control OpenShift Logging pod placement You can use taints and tolerations to ensure that logging subsystem pods run on specific nodes and that no other workload can run on those nodes. Taints and tolerations are simple key:value pair. A taint on a node instructs the node to repel all pods that do not tolerate the taint. The key is any string, up to 253 characters and the value is any string up to 63 characters. The string must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. Sample logging subsystem CR with tolerations apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 tolerations: 1 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: "ZeroRedundancy" visualization: type: "kibana" kibana: tolerations: 2 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: "fluentd" fluentd: tolerations: 3 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi 1 This toleration is added to the Elasticsearch pods. 2 This toleration is added to the Kibana pod. 3 This toleration is added to the logging collector pods. 7.7.1. Using tolerations to control the log store pod placement You can control which nodes the log store pods runs on and prevent other workloads from using those nodes by using tolerations on the pods. 
You apply tolerations to the log store pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the log store pods can run on that node. By default, the log store pods have the following toleration: tolerations: - effect: "NoExecute" key: "node.kubernetes.io/disk-pressure" operator: "Exists" Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the OpenShift Logging pods: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 elasticsearch=node:NoExecute This example places a taint on node1 that has key elasticsearch , value node , and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that match the taint and remove existing pods that do not match. Edit the logstore section of the ClusterLogging CR to configure a toleration for the Elasticsearch pods: logStore: type: "elasticsearch" elasticsearch: nodeCount: 1 tolerations: - key: "elasticsearch" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require a taint with the key elasticsearch to be present on the Node. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration could be scheduled onto node1 . 7.7.2. Using tolerations to control the log visualizer pod placement You can control the node where the log visualizer pod runs and prevent other workloads from using those nodes by using tolerations on the pods. You apply tolerations to the log visualizer pod through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the Kibana pod can run on that node. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the log visualizer pod: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 kibana=node:NoExecute This example places a taint on node1 that has key kibana , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and remove existing pods that do not match. Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod: visualization: type: "kibana" kibana: tolerations: - key: "kibana" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. 
A pod with this toleration would be able to schedule onto node1. 7.7.3. Using tolerations to control the log collector pod placement You can control which nodes the logging collector pods run on and prevent other workloads from using those nodes by using tolerations on the pods. You apply tolerations to logging collector pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. You can use taints and tolerations to ensure the pod does not get evicted for things like memory and CPU issues. By default, the logging collector pods have the following toleration: tolerations: - key: "node-role.kubernetes.io/master" operator: "Exists" effect: "NoExecute" Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the logging collector pods: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 collector=node:NoExecute This example places a taint on node1 that has key collector , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods: collection: logs: type: "fluentd" fluentd: tolerations: - key: "collector" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1 . 7.7.4. Additional resources Controlling pod placement using node taints . 7.8. Moving logging subsystem resources with node selectors You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes. 7.8.1. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ...
spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification.
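While the pod is being rescheduled, you can watch it move to the new node. This is an optional sketch rather than part of the documented procedure; the component=kibana label selector is an assumption about how the Kibana pods are labeled in a typical deployment, so adjust it if your pods use different labels. USD oc get pods -n openshift-logging -l component=kibana -o wide -w Press Ctrl+C to stop watching once the new pod is Running on the infrastructure node.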
After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s 7.9. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 7.9.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.10.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: "worker" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10 1 Set the permissions for the journald.conf file. It is recommended to set 0644 permissions. 2 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes . 3 Configure whether to forward log messages. Defaults to no for each. 
Specify: ForwardToConsole to forward logs to the system console. ForwardToKMsg to forward logs to the kernel log buffer. ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 4 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month . 5 Configure rate limiting. If more logs are received than what is specified in RateLimitBurst during the time interval defined by RateLimitIntervalSec , all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000 , which are the defaults. 6 Specify how logs are stored. The default is persistent : volatile to store logs in memory in /var/log/journal/ . persistent to store logs to disk in /var/log/journal/ . systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal . none to not store logs. systemd drops all logs. 7 Specify the timeout before synchronizing journal files to disk for ERR , WARNING , NOTICE , INFO , and DEBUG logs. systemd immediately syncs after receiving a CRIT , ALERT , or EMERG log. The default is 1s . 8 Specify the maximum size the journal can use. The default is 8G . 9 Specify how much disk space systemd must leave free. The default is 20% . 10 Specify the maximum size for individual journal files stored persistently in /var/log/journal . The default is 10M . Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as it processes any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html . The default settings listed on that page might not apply to OpenShift Container Platform. Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config. For example: USD oc apply -f 40-worker-custom-journald.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node: USD oc describe machineconfigpool/worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e 7.10. Maintenance and support 7.10.1. About unsupported configurations The supported way of configuring the logging subsystem for Red Hat OpenShift is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. 
The Operators reverse everything to the defined state by default and by design. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . 7.10.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the unmanaged state to modify the following components: The Elasticsearch CR The Kibana deployment The fluent.conf file The Fluentd daemon set You must set the OpenShift Elasticsearch Operator to the unmanaged state to modify the following component: the Elasticsearch deployment files. Explicitly unsupported cases include: Configuring default log rotation . You cannot modify the default log rotation configuration. Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 7.10.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. 
Reported issues must be reproduced after removing any overrides for support to proceed. | [
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc get pods --selector component=collector -o wide -n openshift-logging",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/fluentd --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch-",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"oc edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 elasticsearch=node:NoExecute",
"logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 kibana=node:NoExecute",
"visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"variant: openshift version: 4.10.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/configuring-your-logging-deployment |
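As a quick sanity check after applying the oc adm taint commands shown above, you can list the taints on a node and, if needed, remove one. This is a minimal sketch: the node name node1 and the elasticsearch key are taken from the examples above, so substitute your own values. A trailing hyphen on the taint specification removes that taint (standard oc/kubectl behavior).
oc describe node node1 | grep -i taints
oc adm taint nodes node1 elasticsearch=node:NoExecute-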
Chapter 16. Networking (neutron) Parameters | Chapter 16. Networking (neutron) Parameters Parameter Description DhcpAgentNotification Enables DHCP agent notifications. The default value is True . DockerAdditionalSockets Additional domain sockets for the docker daemon to bind to (useful for mounting into containers that launch other containers). The default value is ['/var/lib/openstack/docker.sock'] . EnableVLANTransparency If True, then allow plugins that support it to create VLAN transparent networks. The default value is False . NeutronAllowL3AgentFailover Allow automatic l3-agent failover. The default value is True . NeutronApiOptEnvVars Hash of optional environment variables. NeutronApiOptVolumes List of optional volumes to be mounted. NeutronBridgeMappings The logical to physical bridge mappings to use. The default ( datacentre:br-ex ) maps br-ex (the external bridge on hosts) to a physical name datacentre , which provider networks can use (for example, the default floating network). If changing this, either use different post-install network scripts or be sure to keep datacentre as a mapping network name. The default value is datacentre:br-ex . NeutronCorePlugin The core plugin for networking. The value should be the entrypoint to be loaded from neutron.core_plugins namespace. The default value is ml2 . NeutronDBSyncExtraParams String of extra command line parameters to append to the neutron-db-manage upgrade head command. NeutronDefaultAvailabilityZones Comma-separated list of default network availability zones to be used by OpenStack Networking (neutron) if its resource is created without availability zone hints. If not set, no AZs will be configured for OpenStack Networking (neutron) network services. NeutronDhcpAgentsPerNetwork The number of DHCP agents to schedule per network. The default value is 0 . NeutronDhcpLoadType Additional to the availability zones aware network scheduler. The default value is networks . NeutronDnsDomain Domain to use for building the hostnames. The default value is openstacklocal . NeutronEnableDVR Enable Distributed Virtual Router. NeutronEnableIgmpSnooping Enable IGMP Snooping. The default value is False . NeutronFirewallDriver Firewall driver for realizing OpenStack Networking (neutron) security group function. The default value is iptables_hybrid . NeutronFlatNetworks Sets the flat network name to configure in plugins. The default value is datacentre . NeutronGeneveMaxHeaderSize Geneve encapsulation header size. The default value is 38 . NeutronGlobalPhysnetMtu MTU of the underlying physical network. OpenStack Networking (neutron) uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, OpenStack Networking uses this value without modification. For overlay networks such as VXLAN, OpenStack Networking automatically subtracts the overlay protocol overhead from this value. The default value is 0 . NeutronML2PhysicalNetworkMtus A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val> . This mapping allows you to specify a physical network MTU value that differs from the default segment_mtu value in ML2 plugin and overwrites values from global_physnet_mtu for the selected network. NeutronMechanismDrivers The mechanism drivers for the OpenStack Networking (neutron) tenant network. The default value is ovn . NeutronMetadataProxySharedSecret Shared secret to prevent spoofing. NeutronNetworkSchedulerDriver The network schedule driver to use for avialability zones. 
The default value is neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler . NeutronNetworkType The tenant network type for OpenStack Networking (neutron). The default value is geneve . NeutronNetworkVLANRanges The OpenStack Networking (neutron) ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the datacentre physical network (See NeutronBridgeMappings ). The default value is datacentre:1:1000 . NeutronOverlayIPVersion IP version used for all overlay network endpoints. The default value is 4 . NeutronOvsIntegrationBridge Name of Open vSwitch bridge to use. NeutronPassword The password for the OpenStack Networking (neutron) service and database account. NeutronPluginExtensions Comma-separated list of enabled extension plugins. The default value is qos,port_security,dns . NeutronPluginMl2PuppetTags Puppet resource tag names that are used to generate configuration files with puppet. The default value is neutron_plugin_ml2 . NeutronPortQuota Number of ports allowed per tenant; a negative value means unlimited. The default value is 500 . NeutronRouterSchedulerDriver The router schedule driver to use for availability zones. The default value is neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler . NeutronRpcWorkers Sets the number of RPC workers for the OpenStack Networking (neutron) service. If not specified, it takes the value of NeutronWorkers ; if that is not specified either, the configuration is left unset and a system-dependent default is chosen (usually 1). NeutronServicePlugins Comma-separated list of service plugin entrypoints. The default value is qos,ovn-router,trunk,segments . NeutronTunnelIdRanges Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation. The default value is ['1:4094'] . NeutronTypeDrivers Comma-separated list of network type driver entrypoints to be loaded. The default value is geneve,vlan,flat . NeutronVhostuserSocketDir The vhost-user socket directory for OVS. NeutronVniRanges Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation. The default value is ['1:65536'] . NeutronWorkers Sets the number of API and RPC workers for the OpenStack Networking service. Note that more workers create a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. NotificationDriver Driver or drivers to handle sending notifications. The default value is messagingv2 . OVNCMSOptions The CMS options to configure in ovs db. OVNDbConnectionTimeout Timeout in seconds for the OVSDB connection transaction. The default value is 180 . OVNDnsServers List of servers to use as DNS forwarders. OVNEnableHaproxyDockerWrapper Generate a wrapper script so that haproxy is launched in a separate container. The default value is True . OVNIntegrationBridge Name of the OVS bridge to use as integration bridge by OVN Controller. The default value is br-int . OVNMetadataEnabled Whether Metadata Service has to be enabled. The default value is True . OVNNeutronSyncMode The synchronization mode of OVN with OpenStack Networking (neutron) DB. The default value is log . OVNNorthboundServerPort Port of the OVN Northbound DB server.
The default value is 6641 . OVNOpenflowProbeInterval The inactivity probe interval of the OpenFlow connection to the OpenvSwitch integration bridge, in seconds. The default value is 60 . OVNQosDriver OVN notification driver for OpenStack Networking (neutron) QOS service plugin. The default value is ovn-qos . OVNRemoteProbeInterval Probe interval in ms. The default value is 60000 . OVNSouthboundServerPort Port of the Southbound DB Server. The default value is 6642 . OVNVifType Type of VIF to be used for ports. The default value is ovs . OvsHwOffload Enable OVS Hardware Offload. This feature is supported from OVS 2.8.0. The default value is False . TenantNetPhysnetMtu MTU of the underlying physical network. OpenStack Networking (neutron) uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, OpenStack Networking (neutron) uses this value without modification. For overlay networks such as VXLAN, OpenStack Networking (neutron) automatically subtracts the overlay protocol overhead from this value. (The mtu setting of the Tenant network in network_data.yaml controls this parameter.) The default value is 1500 . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/networking-neutron-parameters
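These parameters are typically overridden in a custom environment file that is passed to the overcloud deployment with the -e option. The following is an illustrative sketch only: the file name and the values are assumptions; the parameter names come from the reference above.
parameter_defaults:
  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'
  NeutronNetworkVLANRanges: 'datacentre:1:1000,tenant:200:299'
  NeutronGlobalPhysnetMtu: 1500
  NeutronWorkers: 4
Such a file would then be included at deploy time, for example with openstack overcloud deploy --templates -e neutron-overrides.yaml (file name assumed).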
Chapter 36. KIE Server ZIP file installation and configuration | Chapter 36. KIE Server ZIP file installation and configuration You can install KIE Server using the rhpam-7.13.5-kie-server-jws.zip file available from the Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) file on the Customer Portal and then configure the Java Database Connectivity (JDBC) web server data sources on Red Hat JBoss Web Server . 36.1. Installing KIE Server from ZIP files KIE Server provides the runtime environment for business assets and accesses the data stored in the assets repository (knowledge store). You can use ZIP files to install KIE Server on an existing Red Hat JBoss Web Server 5.5.1 server instance. Note To use the installer JAR file to install KIE Server, see Chapter 35, Using the Red Hat Process Automation Manager installer . The following files have been downloaded, as described in Chapter 34, Downloading the Red Hat Process Automation Manager installation files : Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) Red Hat Process Automation Manager 7.13.5 Maven Repository ( rhpam-7.13.5-maven-repository.zip ) A backed-up Red Hat JBoss Web Server 5.5.1 server installation is available. The base directory of the Red Hat JBoss Web Server installation is referred to as JWS_HOME . Sufficient user permissions to complete the installation are granted. Procedure Extract the rhpam-7.13.5-add-ons.zip file. From the extracted rhpam-7.13.5-add-ons.zip file, extract the following files: rhpam-7.13.5-kie-server-jws.zip rhpam-7.13.5-process-engine.zip In the following instructions, the directory that contains the extracted rhpam-7.13.5-kie-server-jws.zip file is called JWS_TEMP_DIR and the directory that contains the extracted rhpam-7.13.5-process-engine.zip file is called ENGINE_TEMP_DIR . Copy the JWS_TEMP_DIR/rhpam-7.13.5-kie-server-jws/kie-server.war directory to the JWS_HOME /tomcat/webapps directory. Note Ensure the names of the Red Hat Process Automation Manager deployments you copy do not conflict with your existing deployments in the Red Hat JBoss Web Server instance. Remove the .war extensions from the kie-server.war folder. Move the kie-tomcat-integration-7.67.0.Final-redhat-00024.jar file from the ENGINE_TEMP_DIR directory to the JWS_HOME /tomcat/lib directory. Move the jboss-jacc-api-<VERSION>.jar , slf4j-api-<VERSION>.jar , and slf4j-jdk14-<VERSION>.jar files from the ENGINE_TEMP_DIR/lib directory to the JWS_HOME /tomcat/lib directory, where <VERSION> is the version artifact file name, in the lib directory. Add the following line to the <host> element in the JWS_HOME /tomcat/conf/server.xml file after the last Valve definition: Open the JWS_HOME /tomcat/conf/tomcat-users.xml file in a text editor. Add users and roles to the JWS_HOME /tomcat/conf/tomcat-users.xml file. In the following example, <ROLE_NAME> is a role supported by Red Hat Process Automation Manager. <USER_NAME> and <USER_PWD> are the user name and password of your choice: If a user has more than one role, as shown in the following example, separate the roles with a comma: Complete one of the following steps in the JWS_HOME /tomcat/bin directory: On Linux or UNIX, create the setenv.sh file with the following content: On Windows, add the following content to the setenv.bat file: Note If you use Microsoft SQL Server, make sure you have appropriate transaction isolation for your database. If you do not, you may experience deadlocks. 
The recommended configuration is to turn on ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT by entering the following statements: 36.2. Configuring JDBC Web Server data sources Java Database Connectivity (JDBC) is an API specification used to connect programs written in Java to databases. A data source is an object that enables a Java Database Connectivity (JDBC) client, such as an application server, to establish a connection with a database. Applications look up the data source on the Java Naming and Directory Interface (JNDI) tree or in the local application context and request a database connection to retrieve data. You must configure data sources for KIE Server to ensure correct data exchange between the servers and the designated database. Typically, solutions using Red Hat Process Automation Manager manage several resources within a single transaction. JMS for asynchronous jobs, events, and timers, for example. Red Hat Process Automation Manager requires an XA driver in the datasource when possible to ensure data atomicity and consistent results. If transactional code for different schemas exists inside listeners or derives from hooks provided by the jBPM engine, an XA driver is also required. Do not use non-XA datasources unless you are positive you do not have multiple resources participating in single transactions. Note For production environments, specify an actual data source. Do not use the example data source in production environments. Prerequisites Red Hat Process Automation Manager is installed on Red Hat JBoss Web Server. The Red Hat Process Automation Manager 7.13.5 Maven Repository ( rhpam-7.13.5-maven-repository.zip ) and the Red Hat Process Automation Manager 7.13.x Add-Ons ( rhpam-7.13.5-add-ons.zip ) files have been downloaded, as described in Chapter 34, Downloading the Red Hat Process Automation Manager installation files . You want to configure one of the following supported databases and Hibernate dialects: DB2: org.hibernate.dialect.DB2Dialect MSSQL: org.hibernate.dialect.SQLServer2012Dialect MySQL: org.hibernate.dialect.MySQL5InnoDBDialect MariaDB: org.hibernate.dialect.MySQL5InnoDBDialect Oracle: org.hibernate.dialect.Oracle10gDialect PostgreSQL: org.hibernate.dialect.PostgreSQL82Dialect PostgreSQL plus: org.hibernate.dialect.PostgresPlusDialect Sybase: org.hibernate.dialect.SybaseASE157Dialect Procedure Complete the following steps to prepare your database: Extract rhpam-7.13.5-add-ons.zip in a temporary directory, for example TEMP_DIR . Extract TEMP_DIR/rhpam-7.13.5-migration-tool.zip . Change your current directory to the TEMP_DIR/rhpam-7.13.5-migration-tool/ddl-scripts directory. This directory contains DDL scripts for several database types. Import the DDL script for your database type into the database that you want to use, for example: psql jbpm < /ddl-scripts/postgresql/postgresql-jbpm-schema.sql Note If you are using PostgreSQL or Oracle in conjunction with Spring Boot, you must import the respective Spring Boot DDL script, for example /ddl-scripts/oracle/oracle-springboot-jbpm-schema.sql or /ddl-scripts/postgresql/postgresql-springboot-jbpm-schema.sql . Extract the rhpam-7.13.5-maven-repository.zip offline Maven repository file. Copy the following libraries from the extracted offline Maven repository to the JWS_HOME/tomcat/lib folder where VERSION is the version of that library: Copy your database JDBC driver to the JWS_HOME/tomcat/lib folder. 
Configure the pooling XA data source in the JWS_HOME/tomcat/conf/context.xml file: Note Some of the properties in the following examples might not apply to your database server. Check the documentation for your JDBC driver to determine which properties to set. Configure an XA data source without pooling capabilities. This XA data source is used to create new connections to the target database. In the following example, the XA datasource is xads and the variables are defined in Table 36.1, "XA data source variables" : Table 36.1. XA data source variables Variable Description <datasource.dbName> The name of the database. <datasource.port> The port number of the database. <datasource.hostname> The name of the database host. <datasource.class> The XADataSource class of the JDBC driver. <datasource.url> The JDBC database connection URL. With some databases, the URL property is url and with other databases (for example H2 databases) this property is URL . <datasource.username> User name for the database connection. <datasource.password> Password for the database connection. <datasource.schema> The database schema. Configure a pooling data source that relies on the XA data source for creating new connections. In this example, the data source is poolingXaDs , <datasource.username> is the user name for the database connection, and <datasource.password> is the password for the database connection: The data source is now available under the java:comp/env/poolingXaDs JNDI name and is passed to KIE Server through the org.kie.server.persistence.ds system property, as described in the following steps. Note The pooling data source configuration relies on additional resources that have been previously configured in the context.xml file of the kie-server application, specifically TransactionManager and TransactionSynchronizationRegistry . Configure KIE Server to use the data source: Open one of the following scripts in a text editor: Note The setenv.sh or setenv.bat script should already exist. However, if it does not, create it. For Linux or Unix: For Windows: Add the following properties to CATALINA_OPTS where <hibernate.dialect> is the Hibernate dialect for your database:
"<Valve className=\"org.kie.integration.tomcat.JACCValve\" />",
"<role rolename=\"<ROLE_NAME>\"/> <user username=\"<USER_NAME>\" password=\"<USER_PWD>\" roles=\"<ROLE_NAME>\"/>",
"<role rolename=\"admin\"/> <role rolename=\"kie-server\"/> <user username=\"rhpamUser\" password=\"user1234\" roles=\"admin,kie-server\"/>",
"CATALINA_OPTS=\"-Xmx1024m -Dorg.jboss.logging.provider=jdk\"",
"set CATALINA_OPTS=-Xmx1024m -Dorg.jboss.logging.provider=jdk",
"ALTER DATABASE <DBNAME> SET ALLOW_SNAPSHOT_ISOLATION ON ALTER DATABASE <DBNAME> SET READ_COMMITTED_SNAPSHOT ON",
"psql jbpm < /ddl-scripts/postgresql/postgresql-jbpm-schema.sql",
"org/jboss/spec/javax/transaction/jboss-transaction-api_1.2_spec/{VERSION}/jboss-transaction-api_1.2_spec-{VERSION}.jar org/jboss/integration/narayana-tomcat/{VERSION}/narayana-tomcat-{VERSION}.jar org/jboss/narayana/jta/narayana-jta/{VERSION}/narayana-jta-{VERSION}.jar org/jboss/jboss-transaction-spi/{VERSION}/jboss-transaction-spi-{VERSION}.jar",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <Context> <Resource auth=\"Container\" databaseName=\"USD{datasource.dbName}\" description=\"XA Data Source\" factory=\"org.apache.tomcat.jdbc.naming.GenericNamingResourcesFactory\" loginTimeout=\"0\" name=\"xads\" uniqueName=\"xads\" portNumber=\"USD{datasource.port}\" serverName=\"USD{datasource.hostname}\" testOnBorrow=\"false\" type=\"USD{datasource.class}\" url=\"USD{datasource.url}\" URL=\"USD{datasource.url}\" user=\"USD{datasource.username}\" password=\"USD{datasource.password}\" driverType=\"4\" schema=\"USD{datasource.schema}\" /> </Context>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <Context> <Resource name=\"poolingXaDs\" uniqueName=\"poolingXaDs\" auth=\"Container\" description=\"Pooling XA Data Source\" factory=\"org.jboss.narayana.tomcat.jta.TransactionalDataSourceFactory\" testOnBorrow=\"true\" transactionManager=\"TransactionManager\" transactionSynchronizationRegistry=\"TransactionSynchronizationRegistry\" type=\"javax.sql.XADataSource\" username=\"USD{datasource.username}\" password=\"USD{datasource.password}\" xaDataSource=\"xads\" /> </Context>",
"JWS_HOME/tomcat/bin/setenv.sh",
"JWS_HOME/tomcat/bin/setenv.bat",
"CATALINA_OPTS=\"-Xmx1024m -Dorg.jboss.logging.provider=jdk -Dorg.kie.server.persistence.ds=java:comp/env/poolingXaDs -Dorg.kie.server.persistence.tm=JBossTS -Dorg.kie.server.persistence.dialect=USD{<hibernate.dialect>}\""
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/kie_server_zip_file_installation_and_configuration |
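After Red Hat JBoss Web Server is restarted, a simple way to confirm that KIE Server is running and can authenticate users is to call its REST information endpoint. The sketch below assumes the example rhpamUser / user1234 credentials from the tomcat-users.xml snippet above and a server listening on localhost:8080; adjust the host, port, and credentials for your environment.
curl -u rhpamUser:user1234 http://localhost:8080/kie-server/services/rest/server
A successful response returns server information, including the KIE Server ID and version.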
Chapter 1. Event-Driven Ansible Automation | Chapter 1. Event-Driven Ansible Automation Event-Driven Ansible is a new way to connect to sources of events and act on those events using rulebooks. This technology improves IT speed and agility, and enables consistency and resilience. 1.1. Event-Driven Ansible benefits Event-Driven Ansible is designed for simplicity and flexibility. With these enhancements, you can: Automate decision making Use many event sources Implement event-driven automation within and across multiple IT use cases Achieve new milestones in efficiency, service delivery excellence and cost savings Event-Driven Ansible minimizes human error and automates processes to increase efficiency in troubleshooting and information gathering. This guide helps you get started with Event-Driven Ansible by providing links to information about understanding, installing, and using Event-Driven Ansible controller. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_event-driven_ansible_guide/assembly-about-event-driven-ansible-automation |
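A rulebook ties an event source to conditions and actions. The following minimal sketch is illustrative only: the webhook port, the event payload fields, and the playbook path are assumptions, and the ansible.eda.webhook source is used purely as an example of an event source.
- name: Respond to service events
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart a service when it reports down
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml
A rulebook like this would typically be run with the ansible-rulebook command, for example ansible-rulebook --rulebook rulebook.yml -i inventory.yml (shown as typical usage).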
Chapter 1. Apicurio Registry 2.6 release notes | Chapter 1. Apicurio Registry 2.6 release notes Red Hat build of Apicurio Registry is a data store for standard event schemas and API designs, and is based on the Apicurio Registry open source community project. You can use Apicurio Registry to manage and share the structure of your data using a web console, REST API, Maven plug-in, or Java client. For example, client applications can dynamically push or pull the latest schema updates to or from Apicurio Registry without needing to redeploy. You can also create optional rules to govern how Apicurio Registry content evolves over time. These rules include validation of content, integrity of artifact references, and backwards or forwards compatibility of schema or API versions. 1.1. Apicurio Registry installation options You can install Apicurio Registry on OpenShift with either of the following data storage options: PostgreSQL database Red Hat AMQ Streams For more details, see Installing and deploying Red Hat build of Apicurio Registry on OpenShift . 1.2. Apicurio Registry supported platforms Apicurio Registry 2.6 supports the following core platforms: Red Hat OpenShift Container Platform: 4.16, 4.15, 4.14, 4.13, 4.12 Red Hat OpenShift Service on AWS: 4.14 Microsoft Azure Red Hat OpenShift: 4.15 PostgreSQL: 15, 14, 13, 12 Red Hat AMQ Streams: 2.7, 2.5, 2.2 OpenJDK: 17, 11 For more details, see the following article: Supported Configurations for Red Hat build of Apicurio Registry . 1.2.1. Supported integration with other products Apicurio Registry 2.6 also supports integration with the following products: Red Hat build of Keycloak 24 Red Hat Single Sign-On (RH-SSO) 7.6 Red Hat build of Debezium 2.3 1.3. Apicurio Registry new features Apicurio Registry 2.6 includes the following new features: Operator metadata versions With this release, Operator metadata versions match Apicurio Registry release versions. For releases, see the following article: Red Hat Integration - Service Registry Operator metadata versions . Support for Red Hat build of Keycloak 24 Red Hat Single Sign-On (RH-SSO) 7.6 is still supported, however references have changed to the new name: Red Hat build of Keycloak. Apicurio Registry Maven plug-in improvements Automatic detection of references in the Maven plug-in by using the autoRef option in the pom.xml file. For more details, see Registry-3439 . This is a Technology Preview feature. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Apicurio Registry user documentation and examples The documentation library has been updated with the new features available in version 2.6: Installing and deploying Red Hat build of Apicurio Registry on OpenShift Migrating Red Hat build of Apicurio Registry deployments Red Hat build of Apicurio Registry User Guide Apicurio Registry v2 core REST API documentation The open source demonstration applications are available from: https://github.com/Apicurio/apicurio-registry/tree/2.6.x/examples 1.4. 
Apicurio Registry deprecated features Apicurio Registry core deprecated features Confluent Schema Registry API version 6 (compatibility API) : Apicurio Registry currently supports two versions of the Confluent Schema Registry API on separate endpoints: version 6 and version 7. The v6 API endpoint is deprecated, and will be removed in a future release. Ensure that you replace all references to the v6 API endpoint with references to the v7 API endpoint. Apicurio Registry Core API version 1 : Apicurio Registry support for the original version 1 of the Apicurio Registry Core API is now deprecated. This v1 legacy API will be removed in the major release. Dynamic log level configuration : The /admin/loggers and /admin/loggers/{logger} API endpoints are now deprecated in the v2 Apicurio Registry Core API. These endpoints will be removed in a future release. Registry V1 export utility : Apicurio Registry support for the command-line export utility is now deprecated. The export tool, which is used to export data from Apicurio Registry 1.x into a format that can be imported into 2.x, will no longer be released or maintained. All customers should have already upgraded from 1.x to 2.x. Apicurio Registry Operator deprecated features JAVA_OPTIONS environment variable : The JAVA_OPTIONS environment variable is no longer the preferred way to configure Java options for Apicurio Registry. You can use the JAVA_OPTS_APPEND environment variable instead. The JAVA_OPTS environment variable is also available, which replaces the default content of Java options. However, it is best to avoid using JAVA_OPTS because it might interfere with some Apicurio Registry Operator functionality. Retention of environment variables for features that are not enabled : The Apicurio Registry Operator sets environment variables to enable and configure various features, such as Salted Challenge Response Authentication Mechanism (SCRAM) security when using Kafka storage. When such features are disabled, the Operator currently retains the associated environment variables, which can cause problems. Retention of such environment variables is deprecated, and the Operator support for it will be removed. Ensure that your deployment does not rely on the retention of such environment variables. Environment variable precedence : The Apicurio Registry Operator might attempt to set an environment variable that is already explicitly specified in the spec.configuration.env field. If an environment variable has a conflicting value, the value set by the Apicurio Registry Operator takes precedence by default. This behavior will change in the future, to enable users to overwrite most environment variables set by the Operator. Ensure that your deployment does not rely on the original precedence behavior. Apicurio Registry Operator removed features Setting environment variables by editing the Deployment resource : This ability was deprecated in versions, and has been removed from this release. 1.5. Upgrading and migrating Apicurio Registry deployments You can upgrade the Apicurio Registry server automatically from Apicurio Registry 2.x to Apicurio Registry 2.6 on OpenShift. There is no automatic upgrade from Apicurio Registry 1.x to Apicurio Registry 2.x, and a migration process is required. 1.5.1. Updating 2.x client dependencies It is not mandatory to update client dependencies for this release. Existing Apicurio Registry 2.x client applications continue to work with Apicurio Registry 2.6. 
However, before the next release of Apicurio Registry, you must update all of your client dependencies to use the latest version of Apicurio Registry. Client dependencies include dependencies for the Apicurio Registry Kafka serializers/deserializers (SerDes), Maven plug-in, and Java client applications. For example, to update the Maven dependencies for a Java client application, specify the version in your pom.xml file as follows: <dependency> <groupId>io.apicurio</groupId> <artifactId>apicurio-registry-client</artifactId> <version>2.6.8.Final-redhat-00001</version> </dependency> For more details, see Legacy REST API date formats enabled by default . 1.5.2. Upgrading from Apicurio Registry 2.x on OpenShift You can upgrade from Apicurio Registry 2.x on OpenShift 4.11 to Apicurio Registry 2.6 on OpenShift 4.12 or later. You must upgrade both your Apicurio Registry and your OpenShift versions, and upgrade OpenShift one minor version at a time. Prerequisites You already have Apicurio Registry 2.x installed on OpenShift 4.11 or later. You have backed up your existing Apicurio Registry storage data in your Kafka topic or PostgreSQL database. For more details, see Installing and deploying Red Hat build of Apicurio Registry on OpenShift . Important In production environments on OpenShift, to help ensure that storage is backed up before upgrading, it is best to set the Operator update approval strategy for Apicurio Registry to manual instead of automatic. Procedure In the OpenShift Container Platform web console, click Administration and then Cluster Settings . Click the pencil icon next to the Channel field, and select the next minor candidate version (for example, change from stable-4.11 to candidate-4.12 ). Click Save and then Update , and wait until the upgrade is complete. If the OpenShift version is less than 4.13, repeat steps 2 and 3, and select candidate-4.13 or later. Click Operators > Installed Operators > Red Hat Integration - Service Registry . Ensure that the Update channel is set to 2.x . If the Update approval is set to Automatic , the upgrade should be approved and installed immediately after the 2.x channel is set. If the Update approval is set to Manual , click Install . Wait until the Operator is deployed and the Apicurio Registry pod is deployed. Verify that your Apicurio Registry system is up and running. Additional resources For more details on how to set the Operator update channel in the OpenShift Container Platform web console, see Changing the update channel for an Operator . 1.5.3. Migrating from Apicurio Registry 1.1 on OpenShift For details on migrating from Apicurio Registry 1.1 to Apicurio Registry 2.x, see Migrating Red Hat build of Apicurio Registry deployments . 1.6. Apicurio Registry resolved issues Table 1.1. Resolved issues in Apicurio Registry 2.6.8 Issue Description IPT-1211 GraphQL Artifact auto detection not working Table 1.2. Resolved issues in Apicurio Registry 2.6.6 Issue Description IPT-1180 Apicurio dereferenced schema fail when multiple references in JSON schema IPT-1209 Various issues in Registry 2.6 IPT-1210 Updates and examples for apicurio registry Table 1.3. Resolved issues in Apicurio Registry 2.6.3 Issue Description IPT-1161 Software build reproducibility IPT-1159 Service Registry Operator: https doesn't work for the service registry application after upgrade to 2.6.1 Table 1.4. Resolved issues in Apicurio Registry 2.6.1 Issue Description IPT-1131 The podTemplateSpecPreview (initContainers) defined in the CR are not propagated to the deployment resource. 1.7.
Apicurio Registry resolved CVEs The following Common Vulnerabilities and Exposures (CVEs) are resolved in Apicurio Registry 2.6: Table 1.5. CVEs resolved in Apicurio Registry 2.6.8 CVE Description CVE-2019-12900 A data integrity error was found in the Linux Kernel's bzip2 functionality when decompressing. A local user could get unexpected results (or corrupted data) as result of decompressing these files. Table 1.6. CVEs resolved in Apicurio Registry 2.6.6 CVE Description CVE-2024-9287 A vulnerability has been found in the Python venv module and CLI. Path names provided when creating a virtual environment were not quoted properly, allowing the creator to inject commands into virtual environment "activation" scripts. CVE-2024-11168 A flaw was found in Python. The urllib.parse.urlsplit() and urlparse() functions improperly validated bracketed hosts ( [] ), allowing hosts that weren't IPv6 or IPvFuture compliant. Table 1.7. CVEs resolved in Apicurio Registry 2.6.5 CVE Description CVE-2024-47561 A vulnerability was found in Apache Avro that allows an attacker to trigger remote code execution by using the special "java-class" attribute. Table 1.8. CVEs resolved in Apicurio Registry 2.6.3 CVE Description CVE-2024-2398 curl: HTTP/2 push headers memory-leak. Table 1.9. CVEs resolved in Apicurio Registry 2.6.1 CVE Description CVE-2024-2700 A vulnerability in the Quarkus causes a leak of local configuration properties into Quarkus applications. CVE-2024-29041 A flaw in the Express.js framework causes malformed URLs to be evaluated. CVE-2024-29180 A flaw in the webpack-dev-middleware package may lead to file leak. CVE-2023-51775 A vulnerability in the jose4j library allows a denial of service via specially crafted JWE. CVE-2024-22201 A vulnerability in the jetty web server can cause the server to stop accepting new connections from valid clients. 1.8. Apicurio Registry known issues The following known issues apply in Apicurio Registry 2.6: Apicurio Registry core known issues IPT-1143 - Misleading "warning" log message regarding ResultSet resource leak You might see a message similar to the following in the logs: This message is incorrect, as no JDBC resources are leaked. You can safely ignore these messages. Registry-3413 - Legacy REST API date formats enabled by default For maximum compatibility and for easier upgrades from older versions of Apicurio Registry, the date format used in the Apicurio Registry REST API is not compliant with OpenAPI standards. This is because of a bug in older versions. Before the release of Apicurio Registry, you must upgrade all of your client applications to use the latest Apicurio Registry client version. The release will fix the date format bug, which will result in older clients no longer being compatible with the REST API. To update your REST API to be OpenAPI compliant, you can fix the date format bug in this version of Apicurio Registry as follows: Update all of your client applications to version 2.6.8.Final-redhat-00001 , as described in Updating 2.x client dependencies . Set the following environment variable to the value shown: REGISTRY_APIS_V2_DATE_FORMAT=yyyy-MM-dd'T'HH:mm:ss'Z' IPT-814 - Apicurio Registry logout feature incompatible with RH-SSO 7.6 In RH-SSO 7.6, the redirect_uri parameter used with the logout endpoint is deprecated. For more details, see the RH-SSO 7.6 Upgrading Guide . Because of this deprecation, when Apicurio Registry is secured by using the RH-SSO Operator, clicking the Logout button displays the Invalid parameter: redirect_uri error. 
For a workaround, see https://access.redhat.com/solutions/6980926 . IPT-701 - CVE-2022-23221 H2 allows loading custom classes from remote servers through JNDI When Apicurio Registry data is stored in AMQ Streams, the H2 database console allows remote attackers to execute arbitrary code by using the JDBC URL. Apicurio Registry is not vulnerable by default and a malicious configuration change is required. Apicurio Registry Operator known issues Operator-42 - Autogeneration of OpenShift route might use wrong base host value If multiple routerCanonicalHostname values are specified, autogeneration of the Apicurio Registry OpenShift route might use a wrong base host value. | [
"<dependency> <groupId>io.apicurio</groupId> <artifactId>apicurio-registry-client</artifactId> <version>2.6.8.Final-redhat-00001</version> </dependency>",
"2024-07-24 08:33:53 WARN <> [io.quarkus.agroal.runtime.AgroalEventLoggingListener] (executor-thread-3) Datasource '<default>': JDBC resources leaked: 1 ResultSet(s) and 0 Statement(s)",
"REGISTRY_APIS_V2_DATE_FORMAT=yyyy-MM-dd'T'HH:mm:ss'Z'"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/release_notes_for_apicurio_registry_2.6/registry-relnotes_service-registry |
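For the date-format workaround described in the known issues, the REGISTRY_APIS_V2_DATE_FORMAT environment variable can be set through the spec.configuration.env field of the ApicurioRegistry custom resource when the deployment is Operator-managed. The following is a sketch: the resource name is an assumption, and only the environment variable entry itself comes from the workaround above.
apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: example-apicurioregistry
spec:
  configuration:
    env:
      - name: REGISTRY_APIS_V2_DATE_FORMAT
        value: "yyyy-MM-dd'T'HH:mm:ss'Z'"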
10.3.2. Cache Limitations With NFS | 10.3.2. Cache Limitations With NFS Opening a file from a shared file system for direct I/O will automatically bypass the cache. This is because this type of access must be direct to the server. Opening a file from a shared file system for writing will not work on NFS version 2 and 3. The protocols of these versions do not provide sufficient coherency management information for the client to detect a concurrent write to the same file from another client. As such, opening a file from a shared file system for either direct I/O or writing will flush the cached copy of the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing. Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache directories, symlinks, device files, FIFOs and sockets. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/fscachelimitnfs |
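For context, caching for an NFS mount is enabled with the fsc mount option while the cachefilesd service is running; the limitations above then determine which files are actually cached. The server name and mount point below are illustrative.
mount -t nfs -o fsc server.example.com:/export /mnt/nfs
Files opened on that mount for direct I/O or for writing bypass the cache, as described above.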
Chapter 6. Updating the OpenShift Data Foundation external secret | Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.9.X to 4.9.Y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.9.z, see Updating Clusters . The OpenShift Container Storage operator has been upgraded to OpenShift Data Foundation version 4.9. See Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation for more information. Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage OpenShift Data foundation Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. The updated permissions for the user are set as: Run the previously downloaded python script and save the JSON output that gets generated, from the external Red Hat Ceph Storage cluster. Run the previously downloaded python script: --rbd-data-pool-name Is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint Is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. Note Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port, are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the JSON output generated after running the script in the step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads Secrets . 
Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage OpenShift Data foundation Storage Systems tab and then click on the storage system name. On the Overview Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. | [
"oc get csv USD(oc get csv -n openshift-storage | grep ocs-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\\.features\\.ocs\\.openshift\\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= <ocs_client_name>",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=RGW_POOL_PREFIX.rgw.meta, allow r pool=.rgw.root, allow rw pool=RGW_POOL_PREFIX.rgw.control, allow rx pool=RGW_POOL_PREFIX.rgw.log, allow x pool=RGW_POOL_PREFIX.rgw.buckets.index",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/upgrading_to_openshift_data_foundation/updating-the-openshift-data-foundation-external-secret_rhodf |
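As an additional check after uploading the JSON file, you can confirm that the rook-ceph-external-cluster-details secret was updated by inspecting it directly. This is a sketch; review the full YAML output rather than relying on a specific key name under data, because the exact key can vary between deployments.
oc get secret rook-ceph-external-cluster-details -n openshift-storage -o yaml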
20.3.3. Using the sftp Command | 20.3.3. Using the sftp Command The sftp utility can be used to open a secure, interactive FTP session. It is similar to ftp except that it uses a secure, encrypted connection. The general syntax is sftp username@hostname.example.com . Once authenticated, you can use a set of commands similar to those used by FTP. Refer to the sftp man page for a list of these commands. To read the man page, execute the command man sftp at a shell prompt. The sftp utility is only available in OpenSSH version 2.5.0p1 and higher. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Configuring_an_OpenSSH_Client-Using_the_sftp_Command
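A short interactive session looks like the following; the host and file names are placeholders. Commands such as ls, get, put, and quit behave much like their ftp counterparts.
sftp username@hostname.example.com
sftp> ls
sftp> get report.txt
sftp> put notes.txt
sftp> quit
For unattended transfers, sftp also accepts a batch file of commands with the -b option, for example sftp -b batchfile.txt username@hostname.example.com.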
Chapter 29. Managing Streams for Apache Kafka | Chapter 29. Managing Streams for Apache Kafka Managing Streams for Apache Kafka requires performing various tasks to keep the Kafka clusters and associated resources running smoothly. Use oc commands to check the status of resources, configure maintenance windows for rolling updates, and leverage tools such as the Streams for Apache Kafka Drain Cleaner and Kafka Static Quota plugin to manage your deployment effectively. 29.1. Maintenance time windows for rolling updates Maintenance time windows allow you to schedule certain rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. 29.1.1. Maintenance time windows overview In most cases, the Cluster Operator only updates your Kafka or ZooKeeper clusters in response to changes to the corresponding Kafka resource. This enables you to plan when to apply changes to a Kafka resource to minimize the impact on Kafka client applications. However, some updates to your Kafka and ZooKeeper clusters can happen without any corresponding change to the Kafka resource. For example, the Cluster Operator will need to perform a rolling restart if a CA (certificate authority) certificate that it manages is close to expiry. While a rolling restart of the pods should not affect availability of the service (assuming correct broker and topic configurations), it could affect performance of the Kafka client applications. Maintenance time windows allow you to schedule such spontaneous rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. If maintenance time windows are not configured for a cluster then it is possible that such spontaneous rolling updates will happen at an inconvenient time, such as during a predictable period of high load. 29.1.2. Maintenance time window definition You configure maintenance time windows by entering an array of strings in the Kafka.spec.maintenanceTimeWindows property. Each string is a cron expression interpreted as being in UTC (Coordinated Universal Time, which for practical purposes is the same as Greenwich Mean Time). The following example configures a single maintenance time window that starts at midnight and ends at 01:59am (UTC), on Sundays, Mondays, Tuesdays, Wednesdays, and Thursdays: # ... maintenanceTimeWindows: - "* * 0-1 ? * SUN,MON,TUE,WED,THU *" # ... In practice, maintenance windows should be set in conjunction with the Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays properties of the Kafka resource, to ensure that the necessary CA certificate renewal can be completed in the configured maintenance time windows. Note Streams for Apache Kafka does not schedule maintenance operations exactly according to the given windows. Instead, for each reconciliation, it checks whether a maintenance window is currently "open". This means that the start of maintenance operations within a given time window can be delayed by up to the Cluster Operator reconciliation interval. Maintenance time windows must therefore be at least this long. 29.1.3. Configuring a maintenance time window You can configure a maintenance time window for rolling updates triggered by supported processes. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Add or edit the maintenanceTimeWindows property in the Kafka resource. 
For example to allow maintenance between 0800 and 1059 and between 1400 and 1559 you would set the maintenanceTimeWindows as shown below: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... maintenanceTimeWindows: - "* * 8-10 * * ?" - "* * 14-15 * * ?" Create or update the resource: oc apply -f <kafka_configuration_file> Additional resources Section 29.2.1, "Performing a rolling update using a pod management annotation" Section 29.2.2, "Performing a rolling update using a pod annotation" 29.2. Starting rolling updates of Kafka and other operands using annotations Streams for Apache Kafka supports the use of annotations to manually trigger a rolling update of Kafka and other operands through the Cluster Operator. Use annotations to initiate rolling updates of Kafka, Kafka Connect, MirrorMaker 2, and ZooKeeper clusters. Manually performing a rolling update on a specific pod or set of pods is usually only required in exceptional circumstances. However, rather than deleting the pods directly, if you perform the rolling update through the Cluster Operator you ensure the following: The manual deletion of the pod does not conflict with simultaneous Cluster Operator operations, such as deleting other pods in parallel. The Cluster Operator logic handles the Kafka configuration specifications, such as the number of in-sync replicas. 29.2.1. Performing a rolling update using a pod management annotation This procedure describes how to trigger a rolling update of Kafka, Kafka Connect, MirrorMaker 2, or ZooKeeper clusters. To trigger the update, you add an annotation to the StrimziPodSet that manages the pods running on the cluster. Prerequisites To perform a manual rolling update, you need a running Cluster Operator. The cluster for the component you are updating, whether it's Kafka, Kafka Connect, MirrorMaker 2, or ZooKeeper, must also be running. Procedure Find the name of the resource that controls the pods you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding names are my-cluster-kafka and my-cluster-zookeeper . For a Kafka Connect cluster named my-connect-cluster , the corresponding name is my-connect-cluster-connect . And for a MirrorMaker 2 cluster named my-mm2-cluster , the corresponding name is my-mm2-cluster-mirrormaker2 . Use oc annotate to annotate the appropriate resource in OpenShift. Annotating a StrimziPodSet oc annotate strimzipodset <cluster_name>-kafka strimzi.io/manual-rolling-update="true" oc annotate strimzipodset <cluster_name>-zookeeper strimzi.io/manual-rolling-update="true" oc annotate strimzipodset <cluster_name>-connect strimzi.io/manual-rolling-update="true" oc annotate strimzipodset <cluster_name>-mirrormaker2 strimzi.io/manual-rolling-update="true" Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated resource is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is automatically removed from the resource. 29.2.2. Performing a rolling update using a pod annotation This procedure describes how to manually trigger a rolling update of existing Kafka, Kafka Connect, MirrorMaker 2, or ZooKeeper clusters using an OpenShift Pod annotation. When multiple pods are annotated, consecutive rolling updates are performed within the same reconciliation run. 
Prerequisites To perform a manual rolling update, you need a running Cluster Operator. The cluster for the component you are updating, whether it's Kafka, Kafka Connect, MirrorMaker 2, or ZooKeeper, must also be running. You can perform a rolling update on a Kafka cluster regardless of the topic replication factor used. But for Kafka to stay operational during the update, you'll need the following: A highly available Kafka cluster deployment running with nodes that you wish to update. Topics replicated for high availability. Topic configuration specifies a replication factor of at least 3 and a minimum number of in-sync replicas to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ... Procedure Find the name of the Pod you want to manually update. Pod naming conventions are as follows: <cluster_name>-kafka-<index_number> for a Kafka cluster <cluster_name>-zookeeper-<index_number> for a ZooKeeper cluster <cluster_name>-connect-<index_number> for a Kafka Connect cluster <cluster_name>-mirrormaker2-<index_number> for a MirrorMaker 2 cluster The <index_number> assigned to a pod starts at zero and ends at the total number of replicas minus one. Use oc annotate to annotate the Pod resource in OpenShift: oc annotate pod <cluster_name>-kafka-<index_number> strimzi.io/manual-rolling-update="true" oc annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/manual-rolling-update="true" oc annotate pod <cluster_name>-connect-<index_number> strimzi.io/manual-rolling-update="true" oc annotate pod <cluster_name>-mirrormaker2-<index_number> strimzi.io/manual-rolling-update="true" Wait for the reconciliation to occur (every two minutes by default). A rolling update of the annotated Pod is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of a pod is complete, the annotation is automatically removed from the Pod . 29.3. Recovering a cluster from persistent volumes You can recover a Kafka cluster from persistent volumes (PVs) if they are still present. You might want to do this, for example, after: A namespace was deleted unintentionally A whole OpenShift cluster is lost, but the PVs remain in the infrastructure 29.3.1. Recovery from namespace deletion Recovery from namespace deletion is possible because of the relationship between persistent volumes and namespaces. A PersistentVolume (PV) is a storage resource that lives outside of a namespace. A PV is mounted into a Kafka pod using a PersistentVolumeClaim (PVC), which lives inside a namespace. The reclaim policy for a PV tells a cluster how to act when a namespace is deleted. If the reclaim policy is set as: Delete (default), PVs are deleted when PVCs are deleted within a namespace Retain , PVs are not deleted when a namespace is deleted To ensure that you can recover from a PV if a namespace is deleted unintentionally, the policy must be reset from Delete to Retain in the PV specification using the persistentVolumeReclaimPolicy property: apiVersion: v1 kind: PersistentVolume # ... spec: # ... persistentVolumeReclaimPolicy: Retain Alternatively, PVs can inherit the reclaim policy of an associated storage class. Storage classes are used for dynamic volume allocation. 
By configuring the reclaimPolicy property for the storage class, PVs that use the storage class are created with the appropriate reclaim policy. The storage class is configured for the PV using the storageClassName property. apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # ... # ... reclaimPolicy: Retain apiVersion: v1 kind: PersistentVolume # ... spec: # ... storageClassName: gp2-retain Note If you are using Retain as the reclaim policy, but you want to delete an entire cluster, you need to delete the PVs manually. Otherwise they will not be deleted, and may cause unnecessary expenditure on resources. 29.3.2. Recovery from loss of an OpenShift cluster When a cluster is lost, you can use the data from disks/volumes to recover the cluster if they were preserved within the infrastructure. The recovery procedure is the same as with namespace deletion, assuming PVs can be recovered and they were created manually. 29.3.3. Recovering a deleted cluster from persistent volumes This procedure describes how to recover a deleted cluster from persistent volumes (PVs). In this situation, the Topic Operator identifies that topics exist in Kafka, but the KafkaTopic resources do not exist. When you get to the step to recreate your cluster, you have two options: Use Option 1 when you can recover all KafkaTopic resources. The KafkaTopic resources must therefore be recovered before the cluster is started so that the corresponding topics are not deleted by the Topic Operator. Use Option 2 when you are unable to recover all KafkaTopic resources. In this case, you deploy your cluster without the Topic Operator, delete the Topic Operator topic store metadata, and then redeploy the Kafka cluster with the Topic Operator so it can recreate the KafkaTopic resources from the corresponding topics. Note If the Topic Operator is not deployed, you only need to recover the PersistentVolumeClaim (PVC) resources. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A volumeName is specified for the PVC and this must match the name of the PV. For more information, see Persistent storage . Note The procedure does not include recovery of KafkaUser resources, which must be recreated manually. If passwords and certificates need to be retained, secrets must be recreated before creating the KafkaUser resources. Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example output showing columns important to this procedure: NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2 NAME shows the name of each PV. RECLAIM POLICY shows that PVs are retained . CLAIM shows the link to the original PVCs. 
Recreate the original namespace: oc create namespace myproject Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. For example: apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator. oc create -f install/cluster-operator -n my-project Recreate your cluster. Follow the steps depending on whether or not you have all the KafkaTopic resources needed to recreate your cluster. Option 1 : If you have all the KafkaTopic resources that existed before you lost your cluster, including internal topics such as committed offsets from __consumer_offsets : Recreate all KafkaTopic resources. It is essential that you recreate the resources before deploying the cluster, or the Topic Operator will delete the topics. Deploy the Kafka cluster. For example: oc apply -f kafka.yaml Option 2 : If you do not have all the KafkaTopic resources that existed before you lost your cluster: Deploy the Kafka cluster, as with the first option, but without the Topic Operator by removing the topicOperator property from the Kafka resource before deploying. If you include the Topic Operator in the deployment, the Topic Operator will delete all the topics. Delete the internal topic store topics from the Kafka cluster: oc run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. 
Enable the Topic Operator by redeploying the Kafka cluster with the topicOperator property to recreate the KafkaTopic resources. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} 1 #... 1 Here we show the default configuration, which has no additional properties. You specify the required configuration using the properties described in the EntityTopicOperatorSpec schema reference . Verify the recovery by listing the KafkaTopic resources: oc get KafkaTopic 29.4. Frequently asked questions 29.4.1. Questions related to the Cluster Operator 29.4.1.1. Why do I need cluster administrator privileges to install Streams for Apache Kafka? To install Streams for Apache Kafka, you need to be able to create the following cluster-scoped resources: Custom Resource Definitions (CRDs) to instruct OpenShift about resources that are specific to Streams for Apache Kafka, such as Kafka and KafkaConnect ClusterRoles and ClusterRoleBindings Cluster-scoped resources, which are not scoped to a particular OpenShift namespace, typically require cluster administrator privileges to install. As a cluster administrator, you can inspect all the resources being installed (in the /install/ directory) to ensure that the ClusterRoles do not grant unnecessary privileges. After installation, the Cluster Operator runs as a regular Deployment , so any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. The cluster administrator can grant standard users the privileges necessary to manage Kafka custom resources. See also: Why does the Cluster Operator need to create ClusterRoleBindings ? Can standard OpenShift users create Kafka custom resources? 29.4.1.2. Why does the Cluster Operator need to create ClusterRoleBindings ? OpenShift has built-in privilege escalation prevention , which means that the Cluster Operator cannot grant privileges it does not have itself, specifically, it cannot grant such privileges in a namespace it cannot access. Therefore, the Cluster Operator must have the privileges necessary for all the components it orchestrates. The Cluster Operator needs to be able to grant access so that: The Topic Operator can manage KafkaTopics , by creating Roles and RoleBindings in the namespace that the operator runs in The User Operator can manage KafkaUsers , by creating Roles and RoleBindings in the namespace that the operator runs in The failure domain of a Node is discovered by Streams for Apache Kafka, by creating a ClusterRoleBinding When using rack-aware partition assignment, the broker pod needs to be able to get information about the Node it is running on, for example, the Availability Zone in Amazon AWS. A Node is a cluster-scoped resource, so access to it can only be granted through a ClusterRoleBinding , not a namespace-scoped RoleBinding . 29.4.1.3. Can standard OpenShift users create Kafka custom resources? By default, standard OpenShift users will not have the privileges necessary to manage the custom resources handled by the Cluster Operator. The cluster administrator can grant a user the necessary privileges using OpenShift RBAC resources. For more information, see Section 4.6, "Designating Streams for Apache Kafka administrators" . 29.4.1.4. What do the failed to acquire lock warnings in the log mean? For each cluster, the Cluster Operator executes only one operation at a time. 
The Cluster Operator uses locks to make sure that there are never two parallel operations running for the same cluster. Other operations must wait until the current operation completes before the lock is released. INFO Examples of cluster operations include cluster creation , rolling update , scale down , and scale up . If the waiting time for the lock takes too long, the operation times out and the following warning message is printed to the log: 2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster Depending on the exact configuration of STRIMZI_FULL_RECONCILIATION_INTERVAL_MS and STRIMZI_OPERATION_TIMEOUT_MS , this warning message might appear occasionally without indicating any underlying issues. Operations that time out are picked up in the periodic reconciliation, so that the operation can acquire the lock and execute again. Should this message appear periodically, even in situations when there should be no other operations running for a given cluster, it might indicate that the lock was not properly released due to an error. If this is the case, try restarting the Cluster Operator. 29.4.1.5. Why is hostname verification failing when connecting to NodePorts using TLS? Currently, off-cluster access using NodePorts with TLS encryption enabled does not support TLS hostname verification. As a result, the clients that verify the hostname will fail to connect. For example, the Java client will fail with the following exception: Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more To connect, you must disable hostname verification. In the Java client, you can do this by setting the configuration option ssl.endpoint.identification.algorithm to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", ""); | [
"maintenanceTimeWindows: - \"* * 0-1 ? * SUN,MON,TUE,WED,THU *\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # maintenanceTimeWindows: - \"* * 8-10 * * ?\" - \"* * 14-15 * * ?\"",
"apply -f <kafka_configuration_file>",
"annotate strimzipodset <cluster_name>-kafka strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-zookeeper strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-connect strimzi.io/manual-rolling-update=\"true\" annotate strimzipodset <cluster_name>-mirrormaker2 strimzi.io/manual-rolling-update=\"true\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"annotate pod <cluster_name>-kafka-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-zookeeper-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-connect-<index_number> strimzi.io/manual-rolling-update=\"true\" annotate pod <cluster_name>-mirrormaker2-<index_number> strimzi.io/manual-rolling-update=\"true\"",
"apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain",
"apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain",
"apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain",
"get pv",
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2",
"create namespace myproject",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c",
"apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem",
"claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea",
"create -f install/cluster-operator -n my-project",
"apply -f kafka.yaml",
"run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #",
"get KafkaTopic",
"2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster",
"Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more",
"ssl.endpoint.identification.algorithm=",
"props.put(\"ssl.endpoint.identification.algorithm\", \"\");"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/management-tasks-str |
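As a follow-up to the hostname-verification note in the chapter above, the following sketch shows one way to pass an empty ssl.endpoint.identification.algorithm value to the Kafka console consumer when testing a TLS NodePort listener. The bootstrap address, node port, topic name, and truststore details are placeholders rather than values taken from this guide.
cat > /tmp/client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/tmp/truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<truststore_password>
ssl.endpoint.identification.algorithm=
EOF
./bin/kafka-console-consumer.sh \
  --bootstrap-server <node_ip>:<node_port> \
  --topic my-topic --from-beginning \
  --consumer.config /tmp/client.properties
With hostname verification disabled in this way, the client still encrypts traffic and validates the broker certificate against the truststore; only the subject alternative name check is skipped.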
14.3. User Sessions | 14.3. User Sessions 14.3.1. What Are Typical Processes in User Sessions? In a stock GNOME session, programs called daemons run on the system as background processes. You should find the following daemons running by default: dbus-daemon The dbus-daemon provides a message bus daemon which programs can use to exchange messages with one another. dbus-daemon is implemented with the D-Bus library which provides one-to-one communication between any two applications. For extended information, see the dbus-daemon (1) man page. gnome-keyring-daemon Credentials such as user name and password for various programs and websites are stored securely using the gnome-keyring-daemon . This information is written into an encrypted file called the keyring file and saved in the user's home directory. For extended information, see the gnome-keyring-daemon (1) man page. gnome-session The gnome-session program is responsible for running the GNOME Desktop environment with the help of a display manager, such as GDM . The default session for the user is set at the time of system installation by the system administrator. gnome-session typically loads the last session that ran successfully on the system. For extended information, see the gnome-session (1) man page. gnome-settings-daemon The gnome-settings-daemon handles settings for a GNOME session and for all programs that are run within the session. For extended information, see the gnome-settings-daemon (1) man page. gnome-shell gnome-shell provides the core user interface functionality for GNOME, such as launching programs, browsing directories, viewing files and so on. For extended information, see the gnome-shell (1) man page. pulseaudio PulseAudio is a sound server for Red Hat Enterprise Linux that lets programs output audio using the PulseAudio daemon. For extended information, see the pulseaudio (1) man page. Depending on the user's setup, you may also see some of the following, among others: dconf-service ibus at-spi2-dbus-launcher at-spi2-registryd gnome-shell-calendar-server goa-daemon gsd-printer various Evolution factory processes various GVFS processes 14.3.2. Configuring a User Default Session The default session is retrieved from a program called AccountsService . AccountsService stores this information in the /var/lib/AccountsService/users/ directory. Note In GNOME 2, the .dmrc file in the user home directory was used to create default sessions. This .dmrc file is no longer used. Procedure 14.5. Specifying a Default Session for a User Make sure that you have the gnome-session-xsession package installed by running the following command: Navigate to the /usr/share/xsessions directory where you can find .desktop files for each of the available sessions. Consult the contents of the .desktop files to determine the session you want to use. To specify a default session for a user, update the user's account service in the /var/lib/AccountsService/users/ username file : In this sample, GNOME has been set as the default session, using the /usr/share/xsessions/gnome.desktop file. Note that the system default in Red Hat Enterprise Linux 7 is GNOME Classic (the /usr/share/xsessions/gnome-classic.desktop file). After specifying a default session for the user, that session will be used the next time the user logs in, unless the user selects a different session from the login screen. 14.3.3.
Creating a Custom Session To create your own session with customized configuration, follow these steps: Create a .desktop file in /etc/X11/sessions/ new-session .desktop . Make sure that the file specifies the following entries: The Exec entry specifies the command, possibly with arguments, to execute. You can run the custom session with the gnome-session --session= new-session command. For more information on the parameters that you can use with gnome-session , see the gnome-session (1) man page. Create a custom session file in /usr/share/gnome-session/sessions/ new-session .session where you can specify the name and required components for the session: Note that any item that you specify in RequiredComponents needs to have its corresponding .desktop file in /usr/share/applications/ . After configuring the custom session files, the new session will be available in the session list on the GDM login screen. 14.3.4. Viewing User Session Logs If you want to find more information about a problem in a user session, you can view the systemd journal. Because Red Hat Enterprise Linux 7 is a systemd -based system, the user session log data is stored directly in the systemd journal in a binary format. Note In Red Hat Enterprise Linux 6, the user session log data was stored in the ~/.xsession-errors file, which is no longer used. Procedure 14.6. Viewing User Session Logs Determine your user ID ( uid ) by running the following command: View the journal logs for the user ID determined above: Getting More Information The journalctl (1) man page provides more information on the systemd journal usage. For further information about using the systemd journal on Red Hat Enterprise Linux 7, see the Red Hat Enterprise Linux 7 System-Level Authentication Guide. 14.3.5. Adding an Autostart Application for All Users To start an application automatically when the user logs in, you need to create a .desktop file for that application in the /etc/xdg/autostart/ directory. To manage autostart (startup) applications for individual users, use the gnome-session-properties application. Procedure 14.7. Adding an Autostart (Startup) Application for All Users Create a .desktop file in the /etc/xdg/autostart/ directory: Replace Files with the name of the application. Replace nautilus -n with the command you wish to use to run the application. You can use the AutostartCondition key to check for a value of a GSettings key. The session manager runs the application automatically if the key's value is true. If the key's value changes in the running session, the session manager starts or stops the application, depending on what the value for the key was. 14.3.6. Configuring Automatic Login A user with an Administrator account type can enable Automatic Login from the Users panel in the GNOME Settings . System administrators can also set up automatic login manually in the GDM custom configuration file, as follows. Example 14.1. Configuring Automatic Login for a user john Edit the /etc/gdm/custom.conf file and make sure that the [daemon] section in the file specifies the following: Replace john with the user that you want to be automatically logged in. 14.3.7. Configuring Automatic Logout User sessions that have been idle for a specific period of time can be ended automatically. You can set different behavior based on whether the machine is running from a battery or from mains power by setting the corresponding GSettings key, then locking it. 
Warning Keep in mind that users can potentially lose unsaved data if an idle session is automatically ended. Procedure 14.8. Setting Automatic Logout for a Mains Powered Machine Create a local database for machine-wide settings in /etc/dconf/db/local.d/00-autologout : Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/autologout : Update the system databases: Users must log out and back in again before the system-wide settings take effect. The following GSettings keys are of interest: org.gnome.settings-daemon.plugins.power.sleep-inactive-ac-timeout The number of seconds that the computer needs to be inactive before it goes to sleep if it is running from AC power. org.gnome.settings-daemon.plugins.power.sleep-inactive-ac-type What should happen when the timeout has passed if the computer is running from AC power. org.gnome.settings-daemon.plugins.power.sleep-inactive-battery-timeout The number of seconds that the computer needs to be inactive before it goes to sleep if it is running from battery power. org.gnome.settings-daemon.plugins.power.sleep-inactive-battery-type What should happen when the timeout has passed if the computer is running from battery power. You can run the gsettings range command on a key for a list of values which you can use. For example: 14.3.8. Setting Screen Brightness and Idle Time By setting the following GSettings keys, you can configure the drop in the brightness level, and set brightness level and idle time. Example 14.2. Setting the Drop in the Brightness Level To set the drop in the brightness level when the device has been idle for some time, create a local database for machine-wide settings in /etc/dconf/db/local.d/00-power , as in the following example: Example 14.3. Setting Brightness Level To change the brightness level, create a local database for machine-wide settings in /etc/dconf/db/local.d/00-power , as in the following example, and replace 30 with the integer value you want to use: Example 14.4. Setting Idle Time To set the idle time after which the screen must be blanked and the default screensaver displayed, create a local database for machine-wide settings in /etc/dconf/db/local.d/00-session , as in the following example, and replace 900 with the integer value you want to use: You must include the uint32 along with the integer value as shown. Incorporate your changes into the system databases by running the dconf update command as root. Users must log out and back in again before the system-wide settings take effect. Note You can also lock down the above settings to prevent users from changing them. For more information about locks, see Section 9.5.1, "Locking Down Specific Settings" . 14.3.9. Locking the Screen When the User Is Idle If you want to enable the screensaver and make the screen lock automatically when the user is idle, you need to create a dconf profile, set the GSettings key pairs and then lock it to prevent users from editing it. Procedure 14.9. Enabling the Screensaver and Locking the Screen Create a local database for system-wide settings in /etc/dconf/db/local.d/00-screensaver : You must include the uint32 along with the integer key values as shown. Override the user's setting and prevent the user from changing it in the /etc/dconf/db/local.d/locks/screensaver file: Update the system databases: Users must log out and back in again before the system-wide settings take effect. 14.3.10.
Screencast Recording GNOME Shell features a built-in screencast recorder that allows users to record desktop or application activity during their session and distribute the recordings as high-resolution video files in the webm format. Procedure 14.10. Making a Screencast To start the recording, press Ctrl + Alt + Shift + R . When the recorder is capturing the screen activity, it displays a red circle in the bottom-right corner of the screen. To stop the recording, press Ctrl + Alt + Shift + R . The red circle in the bottom-right corner of the screen disappears. Navigate to the ~/Videos folder where you can find the recorded video with a file name that starts with Screencast and includes the date and time of the recording. Note that the built-in recorder always captures the entire screen, including all monitors in multi-monitor setups. | [
"yum install gnome-session-xsession",
"[User] Language= XSession=gnome",
"[Desktop Entry] Encoding=UTF-8 Type=Application Name= Custom Session Comment= This is our custom session Exec= gnome-session --session=new-session",
"[GNOME Session] Name= Custom Session RequiredComponents= gnome-shell-classic;gnome-settings-daemon;",
"id --user 1000",
"journalctl _UID=1000",
"[Desktop Entry] Type=Application Name= Files Exec= nautilus -n OnlyShowIn=GNOME; AutostartCondition= GSettings org.gnome.desktop.background show-desktop-icons",
"[daemon] AutomaticLoginEnable= True AutomaticLogin= john",
"Set the timeout to 900 seconds when on mains power sleep-inactive-ac-timeout= 900 Set action after timeout to be logout when on mains power sleep-inactive-ac-type=' logout '",
"Lock automatic logout settings /org/gnome/settings-daemon/plugins/power/sleep-inactive-ac-timeout /org/gnome/settings-daemon/plugins/power/sleep-inactive-ac-type",
"dconf update",
"gsettings range org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type enum 'blank' 'suspend' 'shutdown' 'hibernate' 'interactive' 'nothing' 'logout'",
"[org/gnome/settings-daemon/plugins/power] idle-dim= true",
"[org/gnome/settings-daemon/plugins/power] idle-brightness= 30",
"[org/gnome/desktop/session] idle-delay=uint32 900",
"Set the lock time out to 180 seconds before the session is considered idle idle-delay=uint32 180 Set this to true to lock the screen when the screensaver activates lock-enabled= true Set the lock timeout to 180 seconds after the screensaver has been activated lock-delay=uint32 180",
"Lock desktop screensaver settings /org/gnome/desktop/session/idle-delay /org/gnome/desktop/screensaver/lock-enabled /org/gnome/desktop/screensaver/lock-delay",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/user-sessions |
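To tie together the dconf workflow described in the chapter above, here is a minimal sketch that writes the machine-wide screen-lock settings, locks them, and then verifies the result from a user session. The file names follow the examples in that chapter; the 180-second values are sample numbers only, and the gsettings checks assume you have logged out and back in after running dconf update.
# As root: create the machine-wide settings and the corresponding locks
cat > /etc/dconf/db/local.d/00-screensaver <<'EOF'
[org/gnome/desktop/session]
idle-delay=uint32 180

[org/gnome/desktop/screensaver]
lock-enabled=true
lock-delay=uint32 180
EOF
cat > /etc/dconf/db/local.d/locks/screensaver <<'EOF'
/org/gnome/desktop/session/idle-delay
/org/gnome/desktop/screensaver/lock-enabled
/org/gnome/desktop/screensaver/lock-delay
EOF
dconf update
# As a user in a new session: confirm the values are applied and can no longer be changed
gsettings get org.gnome.desktop.session idle-delay
gsettings writable org.gnome.desktop.screensaver lock-enabled   # prints: false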
Chapter 2. Setting up the environment for an OpenShift Container Platform installation | Chapter 2. Setting up the environment for an OpenShift Container Platform installation 2.1. Preparing the provisioner node on IBM Cloud Bare Metal (Classic) infrastructure Perform the following steps to prepare the provisioner node. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhel-8-for-x86_64-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt kni Start firewalld : USD sudo systemctl start firewalld Enable firewalld : USD sudo systemctl enable firewalld Start the http service: USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Set the ID of the provisioner node: USD PRVN_HOST_ID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl hardware list Set the ID of the public subnet: USD PUBLICSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the ID of the private subnet: USD PRIVSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the provisioner node public IP address: USD PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r) Set the CIDR for the public network: USD PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the public network: USD PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR Set the gateway for the public network: USD PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r) Set the private IP address of the provisioner node: USD PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | \ jq .primaryBackendIpAddress -r) Set the CIDR for the private network: USD PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the private network: USD PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR Set the gateway for the private network: USD PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r) Set up the bridges for the baremetal and provisioning networks: USD sudo nohup bash -c " nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection 
modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \"10.0.0.0/8 USDPRIV_GATEWAY\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 " Note For eth1 and eth2 , substitute the appropriate interface name, as needed. If required, SSH back into the provisioner node: # ssh kni@provisioner.<cluster-name>.<domain> Verify the connection bridges have been properly created: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2 Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure . In step 1, click Download pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 2.2. Configuring the public subnet All of the OpenShift Container Platform cluster nodes must be on the public subnet. IBM Cloud(R) Bare Metal (Classic) does not provide a DHCP server on the subnet. You must set one up separately on the provisioner node. You must reset the BASH variables defined when preparing the provisioner node. Rebooting the provisioner node after preparing it will delete the BASH variables previously set. Procedure Install dnsmasq : USD sudo dnf install dnsmasq Open the dnsmasq configuration file: USD sudo vi /etc/dnsmasq.conf Add the following configuration to the dnsmasq configuration file: interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile 1 Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same IP address. Replace <pub_cidr> with the CIDR of the public subnet. 2 Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the provisioner node's private IP address on the provisioning network. Replace <prvn_pub_ip> with the provisioner node's public IP address on the baremetal network. To retrieve the value for <pub_cidr> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <pub_gateway> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <prvn_priv_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | \ jq .primaryBackendIpAddress -r Replace <id> with the ID of the provisioner node. To retrieve the value for <prvn_pub_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r Replace <id> with the ID of the provisioner node.
Obtain the list of hardware for the cluster: USD ibmcloud sl hardware list Obtain the MAC addresses and IP addresses for each node: USD ibmcloud sl hardware detail <id> --output JSON | \ jq '.networkComponents[] | \ "\(.primaryIpAddress) \(.macAddress)"' | grep -v null Replace <id> with the ID of the node. Example output "10.196.130.144 00:e0:ed:6a:ca:b4" "141.125.65.215 00:e0:ed:6a:ca:b5" Make a note of the MAC address and IP address of the public network. Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network. Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file: USD sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile Example input 00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1 ... Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node. Start dnsmasq : USD sudo systemctl start dnsmasq Enable dnsmasq so that it starts when booting the node: USD sudo systemctl enable dnsmasq Verify dnsmasq is running: USD sudo systemctl status dnsmasq Example output ● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k Open ports 53 and 67 with UDP protocol: USD sudo firewall-cmd --add-port 53/udp --permanent USD sudo firewall-cmd --add-port 67/udp --permanent Add provisioning to the external zone with masquerade: USD sudo firewall-cmd --change-zone=provisioning --zone=external --permanent This step ensures network address translation for IPMI calls to the management subnet. Reload the firewalld configuration: USD sudo firewall-cmd --reload 2.3. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.12 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 2.4. Extracting the OpenShift Container Platform installer After retrieving the installer, the next step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 2.5. Configuring the install-config.yaml file The install-config.yaml file requires some additional details.
Most of the information is teaching the installer and the resulting cluster enough about the available IBM Cloud(R) Bare Metal (Classic) hardware so that it is able to fully manage it. The material difference between installing on bare metal and installing on IBM Cloud Bare Metal (Classic) is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file. Procedure Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey . apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: "/dev/sda" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: "/dev/sda" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 3 The bmc.address provides a privilegelevel configuration setting with the value set to OPERATOR . This is required for IBM Cloud Bare Metal (Classic) infrastructure. 2 4 Add the MAC address of the private provisioning network NIC for the corresponding node. Note You can use the ibmcloud command-line utility to retrieve the password. USD ibmcloud sl hardware detail <id> --output JSON | \ jq '"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)"' Replace <id> with the ID of the node. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file into the directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 2.6. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 2.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. 
sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. metadata: name: The name to be given to the OpenShift Container Platform cluster. For example, openshift . networking: machineNetwork: - cidr: The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . compute: - name: worker The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. compute: replicas: 2 Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. controlPlane: name: master The OpenShift Container Platform cluster requires a name for control plane (master) nodes. controlPlane: replicas: 3 Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated.
Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 2.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 2.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. 
Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 2.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 2.4. Subfields Subfield Description deviceName A string containing a Linux device name like /dev/vda . The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 2.8. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 2.9. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 2.10. Following the progress of the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt kni",
"sudo systemctl start firewalld",
"sudo systemctl enable firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"PRVN_HOST_ID=<ID>",
"ibmcloud sl hardware list",
"PUBLICSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRIVSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)",
"PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)",
"PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR",
"PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)",
"PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)",
"PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)",
"PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR",
"PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)",
"sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2",
"vim pull-secret.txt",
"sudo dnf install dnsmasq",
"sudo vi /etc/dnsmasq.conf",
"interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r",
"ibmcloud sl hardware list",
"ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null",
"\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"",
"sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile",
"00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1",
"sudo systemctl start dnsmasq",
"sudo systemctl enable dnsmasq",
"sudo systemctl status dnsmasq",
"● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k",
"sudo firewall-cmd --add-port 53/udp --permanent",
"sudo firewall-cmd --add-port 67/udp --permanent",
"sudo firewall-cmd --change-zone=provisioning --zone=external --permanent",
"sudo firewall-cmd --reload",
"export VERSION=stable-4.12",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_cloud_bare_metal_classic/install-ibm-cloud-installation-workflow |
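The command list above ends by tailing the .openshift_install.log file to follow the deployment. As a hedged alternative under the same assumptions (the installer binary in the working directory and ~/clusterconfigs as the cluster directory), the installer's wait-for subcommands report the same milestones and exit once each one is reached:
$ ./openshift-baremetal-install --dir ~/clusterconfigs wait-for bootstrap-complete --log-level debug   # returns once the control plane has formed and the bootstrap VM can be removed
$ ./openshift-baremetal-install --dir ~/clusterconfigs wait-for install-complete --log-level debug     # returns once the cluster reports the installation as complete
These commands only watch progress; they do not change the state of the deployment started by create cluster.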
Chapter 2. Installer-provisioned infrastructure | Chapter 2. Installer-provisioned infrastructure 2.1. vSphere installation requirements Before you begin an installation using installer-provisioned infrastructure, be sure that your vSphere environment meets the following installation requirements. 2.1.1. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Both of these releases support Container Storage Interface (CSI) migration, which is enabled by default on OpenShift Container Platform 4.15. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following tables: Table 2.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Table 2.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift Container Platform version 4.13 and later are based on the RHEL 9.2 host operating system, which raised the microarchitecture requirements to x86-64-v2. See Architectures in the RHEL documentation. Important To ensure the best performance conditions for your cluster workloads that operate on Oracle(R) Cloud Infrastructure (OCI) and on the Oracle(R) Cloud VMware Solution (OCVS) service, ensure volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides some guidance in selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Base-production environment: 500 GB, and 60 VPUs. Heavy-use production environment: More than 500 GB, and 100 or more VPUs. Consider allocating additional VPUs to give enough capacity for updates and scaling activities. See Block Volume Performance Levels (Oracle documentation) . 2.1.2. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 2.1.3. VMware vSphere CSI Driver Operator requirements To install the vSphere Container Storage Interface (CSI) Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. You can create a custom role for the Container Storage Interface (CSI) driver, the vSphere CSI Driver Operator, and the vSphere Problem Detector Operator. The custom role can include privilege sets that assign a minimum set of permissions to each vSphere object. This means that the CSI driver, the vSphere CSI Driver Operator, and the vSphere Problem Detector Operator can establish a basic interaction with these objects. Important Installing an OpenShift Container Platform cluster in a vCenter is tested against a full list of privileges as described in the "Required vCenter account privileges" section. By adhering to the full list of privileges, you can reduce the possibility of unexpected and unsupported behaviors that might occur when creating a custom role with a set of restricted privileges. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . Minimum permissions for the storage components 2.1.4. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. 
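Before assigning the account privileges described in the next section, it can help to spot-check basic reachability from the machine that will run the installer. The vCenter hostname below is a placeholder, and this check is a suggested sanity test rather than part of the documented requirements:
$ curl -k -s -o /dev/null -w '%{http_code}\n' https://vcenter.example.com/sdk   # any HTTP status code confirms that TCP port 443 is reachable; a timeout points to a network or firewall issue
$ nc -zv vcenter.example.com 443                                                # equivalent reachability check if curl is not available
Both commands verify only network reachability to the vCenter API endpoint on port 443; they do not validate credentials or privileges.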
Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 2.1. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename Host.Config.Storage VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. See the "Minimum permissions for the Machine API" table. 
InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 2.2. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine 
compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 2.3. 
Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Minimum required vCenter account privileges After you create a custom role and assign privileges to it, you can create permissions by selecting specific vSphere objects and then assigning the custom role to a user or group for each object. Before you create permissions or request for the creation of permissions for a vSphere object, determine what minimum permissions apply to the vSphere object. By doing this task, you can ensure a basic interaction exists between a vSphere object and OpenShift Container Platform architecture. Important If you create a custom role and you do not assign privileges to it, the vSphere Server by default assigns a Read Only role to the custom role. Note that for the cloud provider API, the custom role only needs to inherit the privileges of the Read Only role. Consider creating a custom role when an account with global administrative privileges does not meet your needs. Important Accounts that are not configured with the required privileges are unsupported. Installing an OpenShift Container Platform cluster in a vCenter is tested against a full list of privileges as described in the "Required vCenter account privileges" section. By adhering to the full list of privileges, you can reduce the possibility of unexpected behaviors that might occur when creating a custom role with a restricted set of privileges. The following tables list the minimum permissions for a vSphere object that interacts with specific OpenShift Container Platform architecture. Example 2.4. 
Minimum permissions on installer-provisioned infrastructure vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Datastore.Browse Datastore.FileManagement Host.Config.Storage InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import`minimum vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. If your cluster does use the Machine API and you want to set the minimum set of permissions for the API, see the "Minimum permissions for the Machine API" table. Folder.Create Folder.Delete InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Example 2.5. 
Minimum permissions for post-installation management of components vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Host.Config.Storage Resource.AssignVMToPool vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Host.Config.Storage vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. If your cluster does use the Machine API and you want to set the minimum set of permissions for the API, see the "Minimum permissions for the Machine API" table. Resource.AssignVMToPool VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Provisioning.DeployTemplate Example 2.6. Minimum permissions for the storage components vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Host.Config.Storage vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Host.Config.Storage vSphere Datastore Always Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Read Only Virtual Machine Folder Always VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. If your cluster does use the Machine API and you want to set the minimum set of permissions for the API, see the "Minimum permissions for the Machine API" table. VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice Example 2.7. 
Minimum permissions for the Machine API vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Resource.AssignVMToPool vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Read Only vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. Resource.AssignVMToPool VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Provisioning.DeployTemplate Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. 
If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses For a network that uses DHCP, an installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME (Canonical Name) record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. 
A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Static IP addresses for vSphere nodes You can provision bootstrap, control plane, and compute nodes to be configured with static IP addresses in environments where Dynamic Host Configuration Protocol (DHCP) does not exist. To configure this environment, you must provide values to the platform.vsphere.hosts.role parameter in the install-config.yaml file. Important Static IP addresses for vSphere nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, the installation program is configured to use the DHCP for the network, but this network has limited configurable capabilities. After you define one or more machine pools in your install-config.yaml file, you can define network definitions for nodes on your network. Ensure that the number of network definitions matches the number of machine pools that you configured for your cluster. Example network configuration that specifies different roles # ... platform: vsphere: hosts: - role: bootstrap 1 networkDevice: ipAddrs: - 192.168.204.10/24 2 gateway: 192.168.204.1 3 nameservers: 4 - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.11/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.12/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.13/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: compute networkDevice: ipAddrs: - 192.168.204.14/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 # ... 1 Valid network definition values include bootstrap , control-plane , and compute . You must list at least one bootstrap network definition in your install-config.yaml configuration file. 2 Lists IPv4, IPv6, or both IP addresses that the installation program passes to the network interface. The machine API controller assigns all configured IP addresses to the default network interface. 3 The default gateway for the network interface. 4 Lists up to 3 DNS nameservers. Important To enable the Technology Preview feature of static IP addresses for vSphere nodes for your cluster, you must include featureSet:TechPreviewNoUpgrade as the initial entry in the install-config.yaml file. After you deployed your cluster to run nodes with static IP addresses, you can scale a machine to use one of these static IP addresses. Additionally, you can use a machine set to configure a machine to use one of the configured static IP addresses. Additional resources Scaling machines to use static IP addresses Using a machine set to scale machines with configured static IP addresses 2.2. 
Preparing to install a cluster using installer-provisioned infrastructure You prepare to install an OpenShift Container Platform cluster on vSphere by completing the following steps: Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Adding your vCenter's trusted root CA certificates to your system trust. 2.2.1. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.15 release notes document. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.2.2. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
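After extracting the installation program and installing the client, a quick version check confirms that the binaries on your PATH are the ones you just downloaded. These commands are a suggested sanity check rather than part of the documented procedure:
$ ./openshift-install version   # prints the installer build and the release image it deploys
$ oc version --client           # prints the client version only, without contacting a cluster
If oc reports an unexpected version, an older copy of the binary probably appears earlier on your PATH.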
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.2.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
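Once the procedure below has generated the key and added it to the agent, two quick checks confirm that the key is usable. The node address is a placeholder, and these checks are a suggested follow-up rather than part of the documented steps:
$ ssh-add -l            # lists the identities currently loaded in the ssh-agent
$ ssh core@<node_ip>    # after deployment, confirms password-less access to an RHCOS node as the core user
The nodes accept the public key that was supplied to the installation program, and only for the core user.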
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.2.4. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 2.3. Installing a cluster on vSphere In OpenShift Container Platform version 4.15, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.3.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed your VMware platform licenses. 
Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3.3. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. 
This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Important Some VMware vCenter Single Sign-On (SSO) environments with Active Directory (AD) integration might primarily require you to use the traditional login method, which requires the <domain>\ construct. To ensure that vCenter account permission checks complete properly, consider using the User Principal Name (UPN) login method, such as <username>@<fully_qualified_domainname> . Select the data center in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from Red Hat OpenShift Cluster Manager . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.3.4. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.3.5. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.3.5.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.3.5.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.3.5.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. 
A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.3.5.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
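For this provisioning step, the following is a minimal sketch of a statically provisioned PersistentVolume that the registry claim could bind to. The volume name, datastore, and VMDK path are placeholders, and the legacy in-tree vSphere volume source shown here is only one possible approach; whether the claim binds to this volume also depends on your storage class configuration, so adapt the sketch to your environment or rely on dynamic provisioning instead.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: image-registry-pv                # placeholder name for the statically provisioned volume
spec:
  capacity:
    storage: 100Gi                       # must be at least the capacity that the claim requests
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[<datastore_name>] kubevols/registry.vmdk"   # placeholder datastore and VMDK path
    fsType: ext4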
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.3.6. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.3.7. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 2.4. Installing a cluster on vSphere with customizations In OpenShift Container Platform version 4.15, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.4.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. 
The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.4.3. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster.
You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 2.4.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Warning You can specify the path of any datastore that exists in a datastore cluster.
By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on vSphere". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters 2.4.4.1. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 7 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. 
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 5 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 6 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 7 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 8 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 9 The vSphere disk provisioning method. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 2.4.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.4.4.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. 
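Before you run the tagging commands in the following procedure, you can optionally confirm that the govc tool can authenticate to your vCenter instance. The following is a minimal sketch; the vCenter host name, user name, and password are placeholders for your environment:

USD export GOVC_URL='https://vcenter.example.com'
USD export GOVC_USERNAME='administrator@vsphere.local'
USD export GOVC_PASSWORD='<password>'
USD govc about

If the command prints vCenter version information, the connection works and you can proceed with creating the tag categories and tags.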
Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 2.4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.4.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.4.7. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.4.7.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.4.7.2. 
Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.4.7.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.4.7.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters.
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.4.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.4.9. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 2.1. 
Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 2.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 2.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 2.4.9.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. 
TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ...
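After you update the HAProxy configuration, you can check the syntax of the file and reload the service before you run the verification steps that follow. This is a minimal sketch that assumes a systemd-managed HAProxy instance and the default configuration path; adjust both for your environment:

USD haproxy -c -f /etc/haproxy/haproxy.cfg

Example output

Configuration file is valid

USD systemctl reload haproxy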
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the Machine config server resource is accessible, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
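You can confirm that the records have propagated before you run the validation commands, for example by querying them with the dig utility. This is a minimal sketch; each query should return the front-end IP address of the load balancer, and if you created a wildcard record for applications, query a host name under it, such as console-openshift-console.apps.<cluster_name>.<base_domain> , instead:

USD dig +short api.<cluster_name>.<base_domain>
USD dig +short apps.<cluster_name>.<base_domain>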
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.4.10. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 2.5. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.15, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.5.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, confirm with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.5.3. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. 
The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter and cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 2.5.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Warning You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters 2.5.4.1. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 8 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 9 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 6 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 9 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 10 The vSphere disk provisioning method. 5 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 2.5.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.5.4.3. 
Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 2.5.4.4. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails.
USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 2.5.5. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 2.5.6. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. 
You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 2.5.6.1. Specifying multiple subnets for your network Before you install an OpenShift Container Platform cluster on a vSphere host, you can specify multiple subnets for a networking implementation so that the vSphere cloud controller manager (CCM) can select the appropriate subnet for a given networking situation. vSphere can use the subnet for managing pods and services on your cluster. For this configuration, you must specify internal and external Classless Inter-Domain Routing (CIDR) implementations in the vSphere CCM configuration. Each CIDR implementation lists an IP address range that the CCM uses to decide what subnets interact with traffic from internal and external networks. Important Failure to configure internal and external CIDR implementations in the vSphere CCM configuration can cause the vSphere CCM to select the wrong subnet. This situation can cause new nodes that associate with a MachineSet object with a single subnet to become unusable, because each new node receives the node.cloudprovider.kubernetes.io/uninitialized taint. These situations can cause communication issues with the Kubernetes API server that can cause installation of the cluster to fail. Prerequisites You created Kubernetes manifest files for your OpenShift Container Platform cluster. Procedure From the directory where you store your OpenShift Container Platform cluster manifest files, open the manifests/cluster-infrastructure-02-config.yml manifest file.
Add a nodeNetworking object to the file and specify internal and external network subnet CIDR implementations for the object. Tip For most networking situations, consider setting the standard multiple-subnet configuration. This configuration requires that you set the same IP address ranges in the nodeNetworking.internal.networkSubnetCidr and nodeNetworking.external.networkSubnetCidr parameters. Example of a configured cluster-infrastructure-02-config.yml manifest file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain ... nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> # ... Additional resources Cluster Network Operator configuration .spec.platformSpec.vsphere.nodeNetworking 2.5.7. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.5.7.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.7. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. 
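For orientation, the following is a minimal sketch of a CNO configuration object that combines the fields from the preceding table; the CIDR values are illustrative only, and the defaultNetwork details are described in the next section:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes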
defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.8. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.9. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.10. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . 
The default value is 100.64.0.0/16 . Table 2.11. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.12. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.13. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.14. 
gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.15. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 2.16. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.17. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.5.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
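If your terminal session is interrupted while the cluster deploys, you can re-attach to an in-progress deployment and wait for it to finish. The following command is a sketch that assumes the same installation directory that you used for the create cluster command:
USD ./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info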
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.5.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.5.10. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 2.5.10.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.5.10.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.5.10.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.5.10.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 2.5.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.5.12. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 2.4. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 2.5. 
Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 2.6. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 2.5.12.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. 
The front-end IP address and ports 80 and 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and ports 80 and 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ...
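After you save the HAProxy configuration, you can optionally check the syntax and reload the service before you verify the endpoints. The following commands are a sketch that assumes a systemd-managed HAProxy instance that reads its configuration from the default path:
USD haproxy -c -f /etc/haproxy/haproxy.cfg
USD sudo systemctl restart haproxy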
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
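Because DNS propagation can take time, you might want to confirm that the records resolve before you run the curl verification steps. The following queries are a sketch only; they use the dig utility and the same placeholders as the DNS record examples above, and your resolver might differ:
USD dig +short api.<cluster_name>.<base_domain>
USD dig +short apps.<cluster_name>.<base_domain>
Each query returns the front-end IP address of the external load balancer after the record has propagated.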
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.5.13. Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Note You can scale the remote workers by creating a worker machineset in a separate subnet. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes.
Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example, set spec: mastersSchedulable: true in the cluster-scheduler-02-config.yml manifest. Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 2.5.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 2.6. Installing a cluster on vSphere in a restricted network In OpenShift Container Platform 4.15, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 2.6.1. Prerequisites You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure . You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 2.6.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 2.6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 2.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 2.6.4. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.15 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. 
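For example, one way to stage the image is to copy it to a directory that the bastion host already serves over HTTP. The command below is illustrative only; the user, host, destination path, and file name are hypothetical and depend on how your bastion server is configured:
USD scp rhcos-vmware.x86_64.ova <user>@<bastion_host>:/var/www/html/images/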
The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 2.6.5. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from an earlier release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 2.6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster.
For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Warning You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. 
This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters 2.6.6.1. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 7 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2> 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 5 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 6 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 7 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 8 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 9 The vSphere disk provisioning method. 10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server. 11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . 
For <credentials> , specify the base64-encoded user name and password for your mirror registry. 12 Provide the contents of the certificate file that you used for your mirror registry. 13 Provide the imageContentSources section from the output of the command to mirror the repository. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 2.6.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. 
Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.6.6.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. 
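To make the tagging workflow more concrete before reviewing the sample file that follows, the sequence below shows the same govc commands with illustrative values based on the example region and zone table earlier in this section; the us-east datacenter and us-east-1 cluster paths are hypothetical and must match the object names in your vCenter:
USD govc tags.category.create -d "OpenShift region" openshift-region
USD govc tags.category.create -d "OpenShift zone" openshift-zone
USD govc tags.create -c openshift-region us-east
USD govc tags.create -c openshift-zone us-east-1
USD govc tags.attach -c openshift-region us-east /us-east
USD govc tags.attach -c openshift-zone us-east-1 /us-east/host/us-east-1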
Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 2.6.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.6.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.6.9. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 2.6.10. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 2.6.10.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.6.10.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. 
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.6.10.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 2.6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.6.12. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.
Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 2.7. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 2.8. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 2.9. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 2.6.12.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. 
TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #...
listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> 
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy:
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 2.6.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . Set up your registry and configure registry storage . | [
"platform: vsphere: hosts: - role: bootstrap 1 networkDevice: ipAddrs: - 192.168.204.10/24 2 gateway: 192.168.204.1 3 nameservers: 4 - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.11/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.12/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.13/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: compute networkDevice: ipAddrs: - 192.168.204.14/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 8 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 9 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_vsphere/installer-provisioned-infrastructure |
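The curl checks in the procedure above assume that the api and *.apps DNS records already resolve to the load balancer front end. The following is a minimal sketch, not part of the installation procedure, that confirms this with dig before repeating the API check over the DNS name; the placeholder values, the use of dig (from bind-utils), and the choice of just two hostnames are assumptions for illustration.

#!/usr/bin/env bash
# Minimal sketch: confirm that the api and *.apps records resolve to the
# load balancer front-end IP before running the curl verification steps.
# Replace the placeholder values with your own.
set -euo pipefail

LB_IP="<load_balancer_ip_address>"
CLUSTER="<cluster_name>"
DOMAIN="<base_domain>"

for host in "api.${CLUSTER}.${DOMAIN}" "console-openshift-console.apps.${CLUSTER}.${DOMAIN}"; do
  resolved="$(dig +short "${host}" | tail -n1)"
  if [ "${resolved}" = "${LB_IP}" ]; then
    echo "OK: ${host} -> ${resolved}"
  else
    echo "MISMATCH: ${host} -> '${resolved}' (expected ${LB_IP})" >&2
  fi
done

# Once the records resolve, repeat the API check through the DNS name:
curl --insecure --silent "https://api.${CLUSTER}.${DOMAIN}:6443/version" | head -n 5

Run the sketch from a host that uses the same DNS servers as the cluster nodes, so the result reflects what the cluster itself will resolve.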
Chapter 65. CM SMS Gateway Component | Chapter 65. CM SMS Gateway Component Available as of Camel version 2.18 Camel-Cm-Sms is an Apache Camel component for the CM SMS Gateway ( https://www.cmtelecom.com ). It allows you to integrate the CM SMS API into an application as a Camel component. You must have a valid account. More information is available at CM Telecom . cm-sms://sgw01.cm.nl/gateway.ashx?defaultFrom=DefaultSender&defaultMaxNumberOfParts=8&productToken=xxxxx Maven users will need to add the following dependency to their pom.xml for this component: --- <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cm-sms</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> --- 65.1. Options The CM SMS Gateway component has no options. The CM SMS Gateway endpoint is configured using URI syntax: cm-sms:host with the following path and query parameters: 65.1.1. Path Parameters (1 parameter): Name Description Default Type host Required SMS Provider HOST with scheme String 65.1.2. Query Parameters (5 parameters): Name Description Default Type defaultFrom (producer) This is the sender name. The maximum length is 11 characters. String defaultMaxNumberOfParts (producer) If it is a multipart message, forces the max number of parts. The message can be truncated. Technically the gateway will first check if a message is larger than 160 characters; if so, the message will be cut into multiple 153-character parts limited by these parameters. 8 Max(8L)::Int) productToken (producer) Required The unique token to use String testConnectionOnStartup (producer) Whether to test the connection to the SMS Gateway on startup false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 65.2. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.cm-sms.enabled Enable cm-sms component true Boolean camel.component.cm-sms.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 65.3. Sample You can try this project to see how camel-cm-sms can be integrated into a Camel route. | [
"cm-sms://sgw01.cm.nl/gateway.ashx?defaultFrom=DefaultSender&defaultMaxNumberOfParts=8&productToken=xxxxx",
"--- <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cm-sms</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> ---",
"cm-sms:host"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/cm-sms-component |
4.2.3. Track Cumulative IO | 4.2.3. Track Cumulative IO This section describes how to track the cumulative amount of I/O to the system. traceio.stp traceio.stp prints the top ten executables generating I/O traffic over time. In addition, it also tracks the cumulative amount of I/O reads and writes done by those ten executables. This information is tracked and printed out in 1-second intervals, and in descending order. Note that traceio.stp also uses the local variable USDreturn , which is also used by disktop.stp from Section 4.2.1, "Summarizing Disk Read/Write Traffic" . Example 4.7. traceio.stp Sample Output | [
"#! /usr/bin/env stap traceio.stp Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]> Copyright (C) 2009 Kai Meyer <[email protected]> Fixed a bug that allows this to run longer And added the humanreadable function # This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation. # global reads, writes, total_io probe vfs.read.return { reads[pid(),execname()] += USDreturn total_io[pid(),execname()] += USDreturn } probe vfs.write.return { writes[pid(),execname()] += USDreturn total_io[pid(),execname()] += USDreturn } function humanreadable(bytes) { if (bytes > 1024*1024*1024) { return sprintf(\"%d GiB\", bytes/1024/1024/1024) } else if (bytes > 1024*1024) { return sprintf(\"%d MiB\", bytes/1024/1024) } else if (bytes > 1024) { return sprintf(\"%d KiB\", bytes/1024) } else { return sprintf(\"%d B\", bytes) } } probe timer.s(1) { foreach([p,e] in total_io- limit 10) printf(\"%8d %15s r: %12s w: %12s\\n\", p, e, humanreadable(reads[p,e]), humanreadable(writes[p,e])) printf(\"\\n\") # Note we don't zero out reads, writes and total_io, # so the values are cumulative since the script started. }",
"[...] Xorg r: 583401 KiB w: 0 KiB floaters r: 96 KiB w: 7130 KiB multiload-apple r: 538 KiB w: 537 KiB sshd r: 71 KiB w: 72 KiB pam_timestamp_c r: 138 KiB w: 0 KiB staprun r: 51 KiB w: 51 KiB snmpd r: 46 KiB w: 0 KiB pcscd r: 28 KiB w: 0 KiB irqbalance r: 27 KiB w: 4 KiB cupsd r: 4 KiB w: 18 KiB Xorg r: 588140 KiB w: 0 KiB floaters r: 97 KiB w: 7143 KiB multiload-apple r: 543 KiB w: 542 KiB sshd r: 72 KiB w: 72 KiB pam_timestamp_c r: 138 KiB w: 0 KiB staprun r: 51 KiB w: 51 KiB snmpd r: 46 KiB w: 0 KiB pcscd r: 28 KiB w: 0 KiB irqbalance r: 27 KiB w: 4 KiB cupsd r: 4 KiB w: 18 KiB"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/traceiosect |
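As a usage sketch for the traceio.stp script above: SystemTap scripts are started with the stap command, and this one prints its top-ten report every second until it is interrupted. The package requirements, the -v and -o options, and the need to run as root or as a member of the stapdev/stapusr groups are assumptions about a typical SystemTap setup rather than part of the original example.

# Minimal usage sketch, assuming the systemtap packages and the matching
# kernel debuginfo are installed; run as root or as a stapdev/stapusr member.
stap -v traceio.stp

# Optionally write the per-second report to a file instead of stdout;
# stop the script with Ctrl+C once enough data has been collected.
stap traceio.stp -o traceio-report.txt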
Chapter 4. Accessing the registry | Chapter 4. Accessing the registry Use the following sections for instructions on accessing the registry, including viewing logs and metrics, as well as securing and exposing the registry. You can access the registry directly to invoke podman commands. This allows you to push images to or pull them from the integrated registry directly using operations like podman push or podman pull . To do so, you must be logged in to the registry using the podman login command. The operations you can perform depend on your user permissions, as described in the following sections. 4.1. Prerequisites You have access to the cluster as a user with the cluster-admin role. You must have configured an identity provider (IDP). For pulling images, for example when using the podman pull command, the user must have the registry-viewer role. To add this role, run the following command: USD oc policy add-role-to-user registry-viewer <user_name> For writing or pushing images, for example when using the podman push command: The user must have the registry-editor role. To add this role, run the following command: USD oc policy add-role-to-user registry-editor <user_name> Your cluster must have an existing project to which the images can be pushed. 4.2. Accessing the registry directly from the cluster You can access the registry from inside the cluster. Procedure Access the registry from the cluster by using internal routes: Access the node by getting the node's name: USD oc get nodes USD oc debug nodes/<node_name> To enable access to tools such as oc and podman on the node, change your root directory to /host : sh-4.2# chroot /host Log in to the container image registry by using your access token: sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443 sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 You should see a message confirming login, such as: Login Succeeded! Note You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure. Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name> . Perform podman pull and podman push operations against your registry: Important You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project. In the following examples, use: Component Value <registry_ip> 172.30.124.220 <port> 5000 <project> openshift <image> image <tag> omitted (defaults to latest ) Pull an arbitrary image: sh-4.2# podman pull <name.io>/<image> Tag the new image with the form <registry_ip>:<port>/<project>/<image> . The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry: sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image> Note You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the next step will fail. To test, you can create a new project to push the image.
Push the newly tagged image to your registry: sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image> Note When pushing images to the internal registry, the repository name must use the <project>/<name> format. Using multiple project levels in the repository name results in an authentication error. 4.3. Checking the status of the registry pods As a cluster administrator, you can list the image registry pods running in the openshift-image-registry project and check their status. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure List the pods in the openshift-image-registry project and view their status: USD oc get pods -n openshift-image-registry Example output NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m 4.4. Viewing registry logs You can view the logs for the registry by using the oc logs command. Procedure Use the oc logs command with deployments to view the logs for the container image registry: USD oc logs deployments/image-registry -n openshift-image-registry Example output 2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown" 2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler" 2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 4.5. Accessing registry metrics The OpenShift Container Registry provides an endpoint for Prometheus metrics . Prometheus is a stand-alone, open source systems monitoring and alerting toolkit. The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. Procedure You can access the metrics by running a metrics query using a cluster role. Cluster role Create a cluster role if you do not already have one to access the metrics: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF To add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user prometheus-scraper <username> Metrics query Get the user token: USD oc whoami -t Run a metrics query on a node or inside a pod, for example: USD curl --insecure -s -u <user>:<secret> \ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20 Example output # HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. # TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit="9f72191",gitVersion="v3.11.0+9f72191-135-dirty",major="3",minor="11+"} 1 # HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache.
# TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type="Hit"} 5 imageregistry_digest_cache_requests_total{type="Miss"} 24 # HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. # TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type="Hit"} 33 imageregistry_digest_cache_scoped_requests_total{type="Miss"} 44 # HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. # TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 # HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. # TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method="get",quantile="0.5"} 0.01296087 imageregistry_http_request_duration_seconds{method="get",quantile="0.9"} 0.014847248 imageregistry_http_request_duration_seconds{method="get",quantile="0.99"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method="get"} 12.260727916000022 1 The <user> object can be arbitrary, but <secret> tag must use the user token. 4.6. Additional resources For more information on allowing pods in a project to reference images in another project, see Allowing pods to reference images across projects . A kubeadmin can access the registry until deleted. See Removing the kubeadmin user for more information. For more information on configuring an identity provider, see Understanding identity provider configuration . | [
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull <name.io>/<image>",
"sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: oc whoami -t",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/registry/accessing-the-registry |
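The login, tag, and push steps described above can also be strung together in a small helper script once oc is logged in to the cluster. The following is a minimal sketch: the registry host matches the procedure, while <project>, <image>, and the source registry <name.io> are placeholders, and the kubeadmin user name is only a label because the token is what authenticates the login.

#!/usr/bin/env bash
# Minimal sketch: authenticate to the integrated registry with the current
# oc token, then pull, tag, and push an image into an existing project.
# The user needs the registry-editor (or system:image-builder) role in <project>.
set -euo pipefail

REGISTRY="image-registry.openshift-image-registry.svc:5000"
PROJECT="<project>"
IMAGE="<image>"

# Any user name without colons works; the token carries the credentials.
podman login -u kubeadmin -p "$(oc whoami -t)" "${REGISTRY}"

podman pull "<name.io>/${IMAGE}"
podman tag "<name.io>/${IMAGE}" "${REGISTRY}/${PROJECT}/${IMAGE}"
podman push "${REGISTRY}/${PROJECT}/${IMAGE}"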
Chapter 4. Viewing application composition by using the Topology view | Chapter 4. Viewing application composition by using the Topology view The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them. 4.1. Prerequisites To view your applications in the Topology view and interact with them, ensure that: You have logged in to the web console . You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. You have created and deployed an application on OpenShift Container Platform using the Developer perspective . You are in the Developer perspective . 4.2. Viewing the topology of your application You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you deploy an application, you are directed automatically to the Graph view where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application. The Topology view provides you the option to monitor your applications using the List view. Use the List view icon ( ) to see a list of all your applications and use the Graph view icon ( ) to switch back to the graph view. You can customize the views as required using the following: Use the Find by name field to find the required components. Search results may appear outside of the visible area; click Fit to Screen from the lower-left toolbar to resize the Topology view to show all components. Use the Display Options drop-down list to configure the Topology view of the various application groupings. The options are available depending on the types of components deployed in the project: Expand group Virtual Machines: Toggle to show or hide the virtual machines. Application Groupings: Clear to condense the application groups into cards with an overview of an application group and alerts associated with it. Helm Releases: Clear to condense the components deployed as Helm Release into cards with an overview of a given release. Knative Services: Clear to condense the Knative Service components into cards with an overview of a given component. Operator Groupings: Clear to condense the components deployed with an Operator into cards with an overview of the given group. Show elements based on Pod Count or Labels Pod Count: Select to show the number of pods of a component in the component icon. Labels: Toggle to show or hide the component labels. The Topology view also provides you the Export application option to download your application in the ZIP file format. You can then import the downloaded application to another project or cluster. For more details, see Exporting an application to another project or cluster in the Additional resources section. 4.3. Interacting with applications and components In the Topology view in the Developer perspective of the web console, the Graph view provides the following options to interact with applications and components: Click Open URL ( ) to see your application exposed by the route on a public URL. Click Edit Source code to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. 
Hover your cursor over the lower left icon on the pod to see the name of the latest build and its status. The status of the application build is indicated as New ( ), Pending ( ), Running ( ), Completed ( ), Failed ( ), and Canceled ( ). The status or phase of the pod is indicated by different colors and tooltips as: Running ( ): The pod is bound to a node and all of the containers are created. At least one container is still running or is in the process of starting or restarting. Not Ready ( ): The pod is running multiple containers, but not all of the containers are ready. Warning ( ): Containers in the pod are being terminated, but the termination did not succeed. Some containers might be in other states. Failed ( ): All containers in the pod terminated, but at least one container terminated in failure. That is, the container either exited with non-zero status or was terminated by the system. Pending ( ): The pod is accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network. Succeeded ( ): All containers in the pod terminated successfully and will not be restarted. Terminating ( ): When a pod is being deleted, it is shown as Terminating by some kubectl commands. Terminating status is not one of the pod phases. A pod is granted a graceful termination period, which defaults to 30 seconds. Unknown ( ): The state of the pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the pod should be running. After you create an application and an image is deployed, the status is shown as Pending . After the application is built, it is displayed as Running . Figure 4.1. Application topology The application resource name is appended with indicators for the different types of resource objects as follows: CJ : CronJob D : Deployment DC : DeploymentConfig DS : DaemonSet J : Job P : Pod SS : StatefulSet (Knative): A serverless application Note Serverless applications take some time to load and display on the Graph view . When you deploy a serverless application, it first creates a service resource and then a revision. After that, it is deployed and displayed on the Graph view . If it is the only workload, you might be redirected to the Add page. After the revision is deployed, the serverless application is displayed on the Graph view . 4.4. Scaling application pods and checking builds and routes The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Details tabs to scale the application pods, check build status, services, and routes as follows: Click on the component node to see the Overview panel to the right. Use the Details tab to: Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic. Check the Labels , Annotations , and Status of the application. Click the Resources tab to: See the list of all the pods, view their status, access logs, and click on the pod to see the pod details. See the builds, their status, access logs, and start a new build if needed. See the services and routes used by the component.
For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component. 4.5. Adding components to an existing project You can add components to a project. Procedure Navigate to the +Add view. Click Add to Project ( ) next to the left navigation pane or press Ctrl + Space . Search for the component and click the Start / Create / Install button or press Enter to add the component to the project and see it in the topology Graph view . Figure 4.2. Adding component via quick search Alternatively, you can also use the available options in the context menu, such as Import from Git , Container Image , Database , From Catalog , Operator Backed , Helm Charts , Samples , or Upload JAR file , by right-clicking in the topology Graph view to add a component to your project. Figure 4.3. Context menu to add services 4.6. Grouping multiple components within an application You can use the +Add view to add multiple components or services to your project and use the topology Graph view to group applications and resources within an application group. Prerequisites You have created and deployed two or more components on OpenShift Container Platform using the Developer perspective. Procedure To add a service to an existing application group, press Shift and drag it to the application group. Dragging a component and adding it to an application group adds the required labels to the component. Figure 4.4. Application grouping Alternatively, you can also add the component to an application as follows: Click the service pod to see the Overview panel to the right. Click the Actions drop-down menu and select Edit Application Grouping . In the Edit Application Grouping dialog box, click the Application drop-down list, and select an appropriate application group. Click Save to add the service to the application group. You can remove a component from an application group by selecting the component, pressing Shift , and dragging it out of the application group. 4.7. Adding services to your application To add a service to your application, use the +Add actions from the context menu in the topology Graph view . Note In addition to the context menu, you can add services by using the sidebar or by hovering and dragging the dangling arrow from the application group. Procedure Right-click an application group in the topology Graph view to display the context menu. Figure 4.5. Add resource context menu Use Add to Application to select a method for adding a service to the application group, such as From Git , Container Image , From Dockerfile , From Devfile , Upload JAR file , Event Source , Channel , or Broker . Complete the form for the method you choose and click Create . For example, to add a service based on the source code in your Git repository, choose the From Git method, fill in the Import from Git form, and click Create . 4.8. Removing services from your application In the topology Graph view , remove a service from your application by using the context menu. Procedure Right-click on a service in an application group in the topology Graph view to display the context menu. Select Delete Deployment to delete the service. Figure 4.6. Deleting deployment option 4.9. Labels and annotations used for the Topology view The Topology view uses the following labels and annotations: Icon displayed in the node Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label.
This matching is done using a predefined set of icons. Link to the source code editor or the source The app.openshift.io/vcs-uri annotation is used to create links to the source code editor. Node Connector The app.openshift.io/connects-to annotation is used to connect the nodes. App grouping The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components. For detailed information on the labels and annotations OpenShift Container Platform applications must use, see Guidelines for labels and annotations for OpenShift applications . 4.10. Additional resources See Importing a codebase from Git to create an application for more information on creating an application from Git. See Connecting an application to a service using the Developer perspective . See Exporting applications | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/odc-viewing-application-composition-using-topology-view |
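For illustration, the following is a minimal Deployment manifest sketch showing how the labels and annotations described above might be applied so that the Topology view can render the icon, source-code link, connector, and application grouping. The resource names, image, and repository URL are assumptions invented for this example and are not part of the product documentation; the exact value format expected by app.openshift.io/connects-to can also vary by console version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-sample                      # hypothetical workload name
  labels:
    app.kubernetes.io/part-of: my-app      # groups the workload under the "my-app" application
    app.kubernetes.io/name: nodejs         # fallback label used for icon matching
    app.openshift.io/runtime: nodejs       # preferred label used for icon matching
  annotations:
    app.openshift.io/vcs-uri: "https://github.com/example/nodejs-sample"   # link to the source code
    app.openshift.io/connects-to: "postgresql"                             # draws a connector to another node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-sample
  template:
    metadata:
      labels:
        app: nodejs-sample
    spec:
      containers:
      - name: nodejs
        image: registry.example.com/nodejs-sample:latest   # illustrative image reference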
5.155. libselinux | 5.155. libselinux 5.155.1. RHBA-2012:0907 - libselinux bug fix update Updated libselinux packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libselinux packages contain the core library of an SELinux system. The libselinux library provides an API for SELinux applications to get and set process and file security contexts, and to obtain security policy decisions. It is required for any applications that use the SELinux API, and used by all applications that are SELinux-aware. Bug Fix BZ# 717147 While the libselinux library was waiting on a netlink socket, if the socket received an EINTR signal, it returned an error which could cause applications like dbus to fail. With this update, the library now retries the netlink socket when it receives an EINTR signal, rather than failing. All users of libselinux are advised to upgrade to these updated packages, which fix this bug. 5.155.2. RHEA-2013:0808 - libselinux enhancement update Updated libselinux packages that add one enhancement are now available for Red Hat Enterprise Linux 6 Extended Update Support. The libselinux packages contain the core library of an SELinux system. The libselinux library provides an API for SELinux applications to get and set process and file security contexts, and to obtain security policy decisions. It is required for any applications that use the SELinux API, and used by all applications that are SELinux-aware. Enhancement BZ# 956982 Previously, a substitution of the "/" directory was not directly possible. With this update, support for a substitution of the root directory has been added to allow proper labeling of all directories and files under an alternative root directory. Users of libselinux are advised to upgrade to these updated packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libselinux |
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: For Red Hat Enterprise Linux based hosts for worker nodes, enable file system access for containers on Red Hat Enterprise Linux based nodes . Note Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS). Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See enabled the key value backend path and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide . Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . 1.1. Enabling file system access for containers on Red Hat Enterprise Linux based nodes Deploying OpenShift Data Foundation on an OpenShift Container Platform with worker nodes on a Red Hat Enterprise Linux base in a user provisioned infrastructure (UPI) does not automatically provide container access to the underlying Ceph file system. Note Skip this step for hosts based on Red Hat Enterprise Linux CoreOS (RHCOS). Procedure Log in to the Red Hat Enterprise Linux based node and open a terminal. For each node in your cluster: Verify that the node has access to the rhel-7-server-extras-rpms repository. If you do not see both rhel-7-server-rpms and rhel-7-server-extras-rpms in the output, or if there is no output, run the following commands to enable each repository: Install the required packages. Persistently enable container use of the Ceph file system in SELinux. 1.2. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully choose a unique path name as the backend path that follows the naming convention since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to performing a write or delete operation on the secret by using the following commands. Create a token matching the above policy. | [
"subscription-manager repos --list-enabled | grep rhel-7-server",
"subscription-manager repos --enable=rhel-7-server-rpms",
"subscription-manager repos --enable=rhel-7-server-extras-rpms",
"yum install -y policycoreutils container-selinux",
"setsebool -P container_use_cephfs on",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_amazon_web_services/preparing_to_deploy_openshift_data_foundation |
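As an optional sanity check, you might verify that the token created above can write and read secrets under the enabled backend path before wiring it into OpenShift Data Foundation. This is a minimal sketch only; the path odf and the secret name test are illustrative assumptions:

export VAULT_TOKEN=<token_created_above>
vault kv put odf/test verification=ok    # write a throwaway secret under the ODF path
vault kv get odf/test                    # confirm the policy allows reads
vault kv delete odf/test                 # clean up the test secret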
Chapter 3. Important Changes to External Kernel Parameters | Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7.7. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters usbcore.quirks = [USB] This parameter provides a list of quirk entries to augment the built-in usb core quirk list. The entries are separated by commas. Each entry has the form VendorID:ProductID:Flags . The IDs are 4-digit hex numbers and Flags is a set of letters. Each letter will change the built-in quirk; setting it if it is clear and clearing it if it is set. The letters have the following meanings: a = USB_QUIRK_STRING_FETCH_255 (string descriptors must not be fetched using a 255-byte read); b = USB_QUIRK_RESET_RESUME (device cannot resume correctly so reset it instead); c = USB_QUIRK_NO_SET_INTF (device cannot handle Set-Interface requests); d = USB_QUIRK_CONFIG_INTF_STRINGS (device cannot handle its Configuration or Interface strings); e = USB_QUIRK_RESET (device cannot be reset (e.g morph devices), do not use reset); f = USB_QUIRK_HONOR_BNUMINTERFACES (device has more interface descriptions than the bNumInterfaces count, and cannot handle talking to these interfaces); g = USB_QUIRK_DELAY_INIT (device needs a pause during initialization, after we read the device descriptor); h = USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL (For high speed and super speed interrupt endpoints, the USB 2.0 and USB 3.0 spec require the interval in microframes (1 microframe = 125 microseconds) to be calculated as interval = 2 ^ ( bInterval -1). Devices with this quirk report their bInterval as the result of this calculation instead of the exponent variable used in the calculation); i = USB_QUIRK_DEVICE_QUALIFIER (device cannot handle device_qualifier descriptor requests); j = USB_QUIRK_IGNORE_REMOTE_WAKEUP (device generates spurious wakeup, ignore remote wakeup capability); k = USB_QUIRK_NO_LPM (device cannot handle Link Power Management); l = USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL (Device reports its bInterval as linear frames instead of the USB 2.0 calculation); m = USB_QUIRK_DISCONNECT_SUSPEND (Device needs to be disconnected before suspend to prevent spurious wakeup); n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a pause after every control message); The example entry: ppc_tm = [PPC] Disables Hardware Transactional Memory. Format: {"off"} cgroup.memory = [KNL] Passes options to the cgroup memory controller. Format: <string> nokmem - This option disables kernel memory accounting. mds = [X86,INTEL] Controls mitigation for the Micro-architectural Data Sampling (MDS) vulnerability. Certain CPUs are vulnerable to an exploit against CPU internal buffers which can forward information to a disclosure gadget under certain conditions. In vulnerable processors, the speculatively forwarded data can be used in a cache side channel attack, to access data to which the attacker does not have direct access. The options are: full - Enable MDS mitigation on vulnerable CPUs. full,nosmt - Enable MDS mitigation and disable Simultaneous multithreading (SMT) on vulnerable CPUs. off - Unconditionally disable MDS mitigation. Not specifying this option is equivalent to mds=full . mitigations = [X86,PPC,S390] Controls optional mitigations for CPU vulnerabilities. 
This is a set of curated, arch-independent options, each of which is an aggregation of existing arch-specific options. The options are: off - Disable all optional CPU mitigations. This improves system performance, but it may also expose users to several CPU vulnerabilities. Equivalent to: nopti [X86,PPC] nospectre_v1 [PPC] nobp=0 [S390] nospectre_v2 [X86,PPC,S390] spec_store_bypass_disable=off [X86,PPC] l1tf=off [X86] mds=off [X86] auto (default) - Mitigate all CPU vulnerabilities, but leave Simultaneous multithreading (SMT) enabled, even if it's vulnerable. This is for users who do not want to be surprised by SMT getting disabled across kernel upgrades, or who have other ways of avoiding SMT-based attacks. Equivalent to: (default behavior) auto,nosmt - Mitigate all CPU vulnerabilities, disabling Simultaneous multithreading (SMT) if needed. This is for users who always want to be fully mitigated, even if it means losing SMT. Equivalent to: l1tf=flush,nosmt [X86] mds=full,nosmt [X86] watchdog_thresh = [KNL] Sets the hard lockup detector stall duration threshold in seconds. The soft lockup detector threshold is set to twice the value. A value of 0 disables both lockup detectors. Default is 10 seconds. novmcoredd [KNL,KDUMP] Disables device dump. The device dump allows drivers to append dump data to vmcore so you can collect driver specified debug info. Drivers can append the data without any limit and this data is stored in memory, so this may cause significant memory stress. Disabling device dump can help save memory but the driver debug data will be no longer available. This parameter is only available when CONFIG_PROC_VMCORE_DEVICE_DUMP is set. Updated kernel parameters resource_alignment Specifies alignment and device to reassign aligned memory resources. Format: [<order of align>@][<domain>:]<bus>:<slot>.<func>[; ... ] [<order of align>@]pci:<vendor>:<device>\[:<subvendor>:<subdevice>][; ... ] If <order of align> is not specified, PAGE_SIZE is used as alignment. PCI-PCI bridge can be specified, if resource windows need to be expanded. irqaffinity = [SMP] Sets the default irq affinity mask. Format: <cpu number>,... ,<cpu number> or <cpu number>-<cpu number> (must be a positive range in ascending order) or a mixture <cpu number>,... ,<cpu number>-<cpu number> Drivers will use drivers' affinity masks for default interrupt assignment instead of placing them all on CPU0. New /proc/sys/net/core parameters bpf_jit_kallsyms If the Berkeley Packet Filter Just in Time compiler is enabled, the compiled images are unknown addresses to the kernel. It means they neither show up in traces nor in the /proc/kallsyms file. This enables export of these addresses, which can be used for debugging/tracing. If the bpf_jit_harden parameter is enabled, this feature is disabled. Possible values are: 0 - Disable Just in Time (JIT) kallsyms export (default value). 1 - Enable Just in Time (JIT) kallsyms export for privileged users only.
Updated /proc/sys/fs parameters dentry-state Dentries are dynamically allocated and deallocated. From linux/include/linux/dcache.h : The nr_dentry number shows the total number of dentries allocated (active + unused). The nr_unused number shows the number of dentries that are not actively used, but are saved in the least recently used (LRU) list for future reuse. The age_limit number is the age in seconds after which dcache entries can be reclaimed when memory is short and the want_pages number is nonzero when the shrink_dcache_pages() function has been called and the dcache is not pruned yet. The nr_negative number shows the number of unused dentries that are also negative dentries, which do not map to any files. Instead, they help speed up rejection of non-existing files provided by users. | [
"quirks=0781:5580:bk,0a5c:5834:gij",
"struct dentry_stat_t dentry_stat { int nr_dentry; int nr_unused; int age_limit; (age in seconds) int want_pages; (pages requested by system) int nr_negative; (# of unused negative dentries) int dummy; (Reserved for future use) };"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.7_release_notes/kernel_parameters_changes |
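The following shell sketch illustrates, in general terms, how boot-time parameters like these are typically added on a RHEL 7 system and how the new proc entries can be inspected; the chosen values are examples only, not tuning recommendations:

# Add a boot-time parameter (here, the MDS mitigation) to all installed kernels
grubby --update-kernel=ALL --args="mds=full,nosmt"

# Inspect the dentry statistics described above
cat /proc/sys/fs/dentry-state

# Check whether JIT kallsyms export is enabled
cat /proc/sys/net/core/bpf_jit_kallsyms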
Chapter 1. Introduction to performance tuning | Chapter 1. Introduction to performance tuning A JBoss EAP installation is optimized by default. However, configurations to your environment, applications, and use of JBoss EAP subsystems can impact performance, meaning additional configuration might be needed. This guide provides optimization recommendations for common JBoss EAP use cases, as well as instructions for monitoring performance and diagnosing performance issues. 1.1. About the use of EAP_HOME in this document In this document, the variable EAP_HOME is used to denote the path to the JBoss EAP installation. Replace this variable with the actual path to your JBoss EAP installation. If you installed the JBoss EAP compressed file, the install directory is the jboss-eap-7.4 directory where you extracted the compressed archive. If you installed JBoss EAP using the RPM install method, the install directory is /opt/rh/eap7/root/usr/share/wildfly/ . If you used the installer to install JBoss EAP, the default path for EAP_HOME is USD{user.home}/EAP-7.4.0 : For Red Hat Enterprise Linux and Solaris: /home/ USER_NAME /EAP-7.4.0/ For Microsoft Windows: C:\Users\ USER_NAME \EAP-7.4.0\ If you used the Red Hat CodeReady Studio installer to install and configure the JBoss EAP server, the default path for EAP_HOME is USD{user.home}/devstudio/runtimes/jboss-eap : For Red Hat Enterprise Linux: /home/ USER_NAME /devstudio/runtimes/jboss-eap/ For Microsoft Windows: C:\Users\ USER_NAME \devstudio\runtimes\jboss-eap or C:\Documents and Settings\ USER_NAME \devstudio\runtimes\jboss-eap\ Note If you set the Target runtime to 7.4 or a later runtime version in Red Hat CodeReady Studio, your project is compatible with the Jakarta EE 8 specification. Note EAP_HOME is not an environment variable. JBOSS_HOME is the environment variable used in scripts. | [
"You should stress test and verify all performance configuration changes under anticipated conditions in a development or testing environment prior to deploying them to production."
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/performance_tuning_guide/about-performance-tuning_default |
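As a purely illustrative sketch, the snippet below shows how a launch script might substitute the installation path for EAP_HOME ; the ZIP-installation path is an assumption based on the defaults listed above:

# Assuming the compressed archive was extracted to the user's home directory
EAP_HOME="$HOME/jboss-eap-7.4"
"$EAP_HOME/bin/standalone.sh" --server-config=standalone.xml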
5.185. man | 5.185. man 5.185.1. RHBA-2012:0449 - man bug fix update An updated man package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The man package provides the man, apropos, and whatis tools for finding information and documentation about the Linux system. Bug Fixes BZ# 659646 Previously, the Japanese version of the man(1) manual page contained a duplicate line in the specification of the "-p pager" option. This update removes the duplicate. BZ# 749290 Prior to this update, the makewhatis script, which creates the whatis database of manual pages, ignored symbolic links between pages. With this update, the makewhatis script includes symbolic links in the whatis database. All users of man are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/man |
Preface | Preface You can integrate some public clouds and third-party applications with the Hybrid Cloud Console. For information about integrating public clouds, see Configuring cloud integrations for Red Hat services . You can integrate the Red Hat Hybrid Cloud Console with Splunk, ServiceNow, Slack, Event-Driven Ansible, Microsoft Teams, Google Chat, and more applications to route event-triggered notifications to those third-party applications. Integrating third-party applications expands the scope of notifications beyond emails and messages, so that you can view and manage Hybrid Cloud Console events from your preferred platform dashboard or communications tool. To learn more about notifications, see Configuring notifications on the Red Hat Hybrid Cloud Console . Prerequisites You have Organization Administrator or Notifications administrator permissions for the Hybrid Cloud Console. You have the required configuration permissions for each third-party application that you want to integrate with the Hybrid Cloud Console. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/pr01 |
Chapter 123. KafkaBridgeHttpConfig schema reference | Chapter 123. KafkaBridgeHttpConfig schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeHttpConfig schema properties Configures HTTP access to a Kafka cluster for the Kafka Bridge. The default HTTP configuration is for the Kafka Bridge to listen on port 8080. 123.1. cors As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP access methods. For the origins, you can use a URL or a Java regular expression. Example Kafka Bridge HTTP configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... http: port: 8080 cors: allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" # ... 123.2. KafkaBridgeHttpConfig schema properties Property Description port The port on which the server listens. integer cors CORS configuration for the HTTP Bridge. KafkaBridgeHttpCors | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # http: port: 8080 cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaBridgeHttpConfig-reference |
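To see the CORS rules in action once the bridge is running, a preflight request can be sent from an allowed origin; the service host name and the /topics endpoint below are assumptions made for the sake of the example:

curl -v -X OPTIONS http://my-bridge-bridge-service:8080/topics \
  -H "Origin: https://strimzi.io" \
  -H "Access-Control-Request-Method: POST"
# An allowed origin should be echoed back in the Access-Control-Allow-Origin response header.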
Chapter 6. Listing available SCAP contents | Chapter 6. Listing available SCAP contents Use this procedure to view what SCAP contents are already loaded in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Your user account has a role assigned that has the view_scap_contents permission. Procedure In the Satellite web UI, navigate to Hosts > Compliance > SCAP contents . CLI procedure Run the following Hammer command on Satellite Server: | [
"hammer scap-content list --location \" My_Location \" --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_security_compliance/listing-available-scap-contents_security-compliance |
Chapter 8. NBDE Tang Server Operator | Chapter 8. NBDE Tang Server Operator 8.1. NBDE Tang Server Operator overview Network-bound Disk Encryption (NBDE) provides an automated unlocking of LUKS-encrypted volumes using one or more dedicated network-binding servers. The client side of NBDE is called the Clevis decryption policy framework and the server side is represented by Tang. The NBDE Tang Server Operator allows the automation of deployments of one or several Tang servers in the OpenShift Container Platform (OCP) environment. 8.2. NBDE Tang Server Operator release notes The following release notes track the development of the NBDE Tang Server Operator in OpenShift Container Platform. RHEA-2023:7491 The NBDE Tang Server Operator 1.0 has been released in the Red Hat OpenShift Enterprise 4 catalog. RHEA-2024:0854 The NBDE Tang Server Operator 1.0.1 has been moved from the alpha channel to the stable channel. RHBA-2024:8681 The 1.0.2 update contains fixes that increase the Container Health Index of containers deployed with the NBDE Tang Server Operator to the highest grade. RHEA-2024:10970 The 1.0.3 update contains changes that re-increase the Container Health Index to the highest grade. RHBA-2025:0663 With the NBDE Tang Server Operator 1.1, the golang package is provided in version 1.23.2 and the golang.org/x/net/html package has been updated to version 0.33.0. The updates increase the Container Health Index. 8.3. Understanding the NBDE Tang Server Operator You can use the NBDE Tang Server Operator to automate the deployment of a Tang server in an OpenShift Container Platform cluster that requires Network Bound Disk Encryption (NBDE) internally, leveraging the tools that OpenShift Container Platform provides to achieve this automation. The NBDE Tang Server Operator simplifies the installation process and uses native features provided by the OpenShift Container Platform environment, such as multi-replica deployment, scaling, traffic load balancing, and so on. The Operator also provides automation of certain operations that are error-prone when you perform them manually, for example: server deployment and configuration key rotation hidden keys deletion The NBDE Tang Server Operator is implemented using the Operator SDK and allows the deployment of one or more Tang servers in OpenShift through custom resource definitions (CRDs). 8.3.1. Additional resources Tang-Operator: Providing NBDE in OpenShift Red Hat Hybrid Cloud blog article tang-operator Github project Configuring automated unlocking of encrypted volumes using policy-based decryption chapter in the RHEL 9 Security hardening document 8.4. Installing the NBDE Tang Server Operator You can install the NBDE Tang Operator either by using the web console or through the oc command from CLI. 8.4.1. Installing the NBDE Tang Server Operator using the web console You can install the NBDE Tang Server Operator from the OperatorHub using the web console. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the NBDE Tang Server Operator: Click Install . On the Operator Installation screen, keep the Update channel , Version , Installation mode , Installed Namespace , and Update approval fields on the default values. After you confirm the installation options by clicking Install , the console displays the installation confirmation. Verification Navigate to the Operators Installed Operators page. 
Check that the NBDE Tang Server Operator is installed and its status is Succeeded . 8.4.2. Installing the NBDE Tang Server Operator using CLI You can install the NBDE Tang Server Operator from the OperatorHub using the CLI. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). Procedure Use the following command to list available Operators on OperatorHub, and limit the output to Tang-related results: USD oc get packagemanifests -n openshift-marketplace | grep tang Example output tang-operator Red Hat In this case, the corresponding packagemanifest name is tang-operator . Create a Subscription object YAML file to subscribe a namespace to the NBDE Tang Server Operator, for example, tang-operator.yaml : Example subscription YAML for tang-operator apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tang-operator namespace: openshift-operators spec: channel: stable 1 installPlanApproval: Automatic name: tang-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4 1 Specify the channel name from where you want to subscribe the Operator. 2 Specify the name of the Operator to subscribe to. 3 Specify the name of the CatalogSource that provides the Operator. 4 The namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub CatalogSources. Apply the Subscription to the cluster: USD oc apply -f tang-operator.yaml Verification Check that the NBDE Tang Server Operator controller runs in the openshift-operators namespace: USD oc -n openshift-operators get pods Example output NAME READY STATUS RESTARTS AGE tang-operator-controller-manager-694b754bd6-4zk7x 2/2 Running 0 12s 8.5. Configuring and managing Tang servers using the NBDE Tang Server Operator With the NBDE Tang Server Operator, you can deploy and quickly configure Tang servers. On the deployed Tang servers, you can list existing keys and rotate them. 8.5.1. Deploying a Tang server using the NBDE Tang Server Operator You can deploy and quickly configure one or more Tang servers using the NBDE Tang Server Operator in the web console. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. You have installed the NBDE Tang Server Operator on your OCP cluster. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Select Project , and click Create Project : On the Create Project page, fill in the required information, for example: Click Create . NBDE Tang Server replicas require a Persistent Volume Claim (PVC) for storing encryption keys. In the web console, navigate to Storage PersistentVolumeClaims : On the following PersistentVolumeClaims screen, click Create PersistentVolumeClaim . On the Create PersistentVolumeClaim page, select storage that fits your deployment scenario. Consider how often you want to rotate the encryption keys. Name your PVC and choose the claimed storage capacity, for example: Navigate to Operators Installed Operators , and click NBDE Tang Server . Click Create instance . On the Create TangServer page, choose the name of the Tang Server instance, the number of replicas, and specify the name of the previously created Persistent Volume Claim, for example: After you enter the required values and change any settings that differ from the default values in your scenario, click Create . 8.5.2.
Rotating keys using the NBDE Tang Server Operator With the NBDE Tang Server Operator, you also can rotate your Tang server keys. The precise interval at which you should rotate them depends on your application, key sizes, and institutional policy. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. You deployed a Tang server using the NBDE Tang Server Operator on your OpenShift cluster. You have installed the OpenShift CLI ( oc ). Procedure List the existing keys on your Tang server, for example: USD oc -n nbde describe tangserver Example output ... Status: Active Keys: File Name: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ... Create a YAML file for moving your active keys to hidden keys, for example, minimal-keyretrieve-rotate-tangserver.yaml : Example key-rotation YAML for tang-operator apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: - sha1: "PvYQKtrTuYsMV2AomUeHrUWkCGg" 1 1 Specify the SHA-1 thumbprint of your active key to rotate it. Apply the YAML file: USD oc apply -f minimal-keyretrieve-rotate-tangserver.yaml Verification After a certain amount of time depending on your configuration, check that the activeKey value is the new hiddenKey value and the activeKey key file is newly generated, for example: USD oc -n nbde describe tangserver Example output ... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Hidden Keys: File Name: .QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg.jwk Generated: 2023-10-25 15:37:29.126928965 +0000 Hidden: 2023-10-25 15:38:13.515467436 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ... 8.5.3. Deleting hidden keys with the NBDE Tang Server Operator After you rotate your Tang server keys, the previously active keys become hidden and are no longer advertised by the Tang instance. You can use the NBDE Tang Server Operator to remove encryption keys no longer used. WARNING Do not remove any hidden keys unless you are sure that all bound Clevis clients already use new keys. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. You deployed a Tang server using the NBDE Tang Server Operator on your OpenShift cluster. You have installed the OpenShift CLI ( oc ). Procedure List the existing keys on your Tang server, for example: USD oc -n nbde describe tangserver Example output ... Status: Active Keys: File Name: PvYQKtrTuYsMV2AomUeHrUWkCGg.jwk Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ... Create a YAML file for removing all hidden keys, for example, hidden-keys-deletion-tangserver.yaml : Example hidden-keys-deletion YAML for tang-operator apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: [] 1 1 The empty array as the value of the hiddenKeys entry indicates you want to preserve no hidden keys on your Tang server. 
Apply the YAML file: USD oc apply -f hidden-keys-deletion-tangserver.yaml Verification After a certain amount of time depending on your configuration, check that the active key still exists, but no hidden key is available, for example: USD oc -n nbde describe tangserver Example output ... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Status: Ready: 1 Running: 1 Service External URL: http://35.222.247.84:7500/adv Tang Server Error: No Events: ... 8.6. Identifying URL of a Tang server deployed with the NBDE Tang Server Operator Before you can configure your Clevis clients to use encryption keys advertised by your Tang servers, you must identify the URLs of the servers. 8.6.1. Identifying URL of the NBDE Tang Server Operator using the web console You can identify the URLs of Tang servers deployed with the NBDE Tang Server Operator from the OperatorHub by using the OpenShift Container Platform web console. After you identify the URLs, you use the clevis luks bind command on your clients containing LUKS-encrypted volumes that you want to unlock automatically by using keys advertised by the Tang servers. See the Configuring manual enrollment of LUKS-encrypted volumes section in the RHEL 9 Security hardening document for detailed steps describing the configuration of clients with Clevis. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. You deployed a Tang server by using the NBDE Tang Server Operator on your OpenShift cluster. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators Tang Server . On the NBDE Tang Server Operator details page, select Tang Server . The list of Tang servers deployed and available for your cluster appears. Click the name of the Tang server you want to bind with a Clevis client. The web console displays an overview of the selected Tang server. You can find the URL of your Tang server in the Tang Server External Url section of the screen: In this example, the URL of the Tang server is http://34.28.173.205:7500 . Verification You can check that the Tang server is advertising by using curl , wget , or similar tools, for example: USD curl 2> /dev/null http://34.28.173.205:7500/adv | jq Example output { "payload": "eyJrZXlzIj...eSJdfV19", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9", "signature": "AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx" } 8.6.2. Identifying URL of the NBDE Tang Server Operator using CLI You can identify the URLs of Tang servers deployed with the NBDE Tang Server Operator from the OperatorHub by using the CLI. After you identify the URLs, you use the clevis luks bind command on your clients containing LUKS-encrypted volumes that you want to unlock automatically by using keys advertised by the Tang servers. See the Configuring manual enrollment of LUKS-encrypted volumes section in the RHEL 9 Security hardening document for detailed steps describing the configuration of clients with Clevis. Prerequisites You must have cluster-admin privileges on an OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You deployed a Tang server by using the NBDE Tang Server Operator on your OpenShift cluster. 
Procedure List details about your Tang server, for example: USD oc -n nbde describe tangserver Example output ... Spec: ... Status: Ready: 1 Running: 1 Service External URL: http://34.28.173.205:7500/adv Tang Server Error: No Events: ... Use the value of the Service External URL: item without the /adv part. In this example, the URL of the Tang server is http://34.28.173.205:7500 . Verification You can check that the Tang server is advertising by using curl , wget , or similar tools, for example: USD curl 2> /dev/null http://34.28.173.205:7500/adv | jq Example output { "payload": "eyJrZXlzIj...eSJdfV19", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9", "signature": "AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx" } 8.6.3. Additional resources Configuring manual enrollment of LUKS-encrypted volumes section in the RHEL 9 Security hardening document. | [
"oc get packagemanifests -n openshift-marketplace | grep tang",
"tang-operator Red Hat",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tang-operator namespace: openshift-operators spec: channel: stable 1 installPlanApproval: Automatic name: tang-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f tang-operator.yaml",
"oc -n openshift-operators get pods",
"NAME READY STATUS RESTARTS AGE tang-operator-controller-manager-694b754bd6-4zk7x 2/2 Running 0 12s",
"oc -n nbde describe tangserver",
"... Status: Active Keys: File Name: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...",
"apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: - sha1: \"PvYQKtrTuYsMV2AomUeHrUWkCGg\" 1",
"oc apply -f minimal-keyretrieve-rotate-tangserver.yaml",
"oc -n nbde describe tangserver",
"... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Hidden Keys: File Name: .QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg.jwk Generated: 2023-10-25 15:37:29.126928965 +0000 Hidden: 2023-10-25 15:38:13.515467436 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...",
"oc -n nbde describe tangserver",
"... Status: Active Keys: File Name: PvYQKtrTuYsMV2AomUeHrUWkCGg.jwk Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...",
"apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: [] 1",
"oc apply -f hidden-keys-deletion-tangserver.yaml",
"oc -n nbde describe tangserver",
"... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Status: Ready: 1 Running: 1 Service External URL: http://35.222.247.84:7500/adv Tang Server Error: No Events: ...",
"curl 2> /dev/null http://34.28.173.205:7500/adv | jq",
"{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }",
"oc -n nbde describe tangserver",
"... Spec: ... Status: Ready: 1 Running: 1 Service External URL: http://34.28.173.205:7500/adv Tang Server Error: No Events: ...",
"curl 2> /dev/null http://34.28.173.205:7500/adv | jq",
"{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/nbde-tang-server-operator |
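On a RHEL client with the clevis-luks package installed, binding an encrypted volume to the Tang server URL identified above might look like the following sketch; the device name is an assumption and the complete client-side procedure is covered in the referenced RHEL documentation:

# Bind a LUKS device to the Tang server advertised at the URL found above
clevis luks bind -d /dev/sda2 tang '{"url":"http://34.28.173.205:7500"}'

# For volumes that must be unlocked automatically at boot, enable the Clevis askpass unit
systemctl enable clevis-luks-askpass.path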
Chapter 16. CredentialExpiryService | Chapter 16. CredentialExpiryService 16.1. GetCertExpiry GET /v1/credentialexpiry GetCertExpiry returns information related to the expiry component mTLS certificate. 16.1.1. Description 16.1.2. Parameters 16.1.2.1. Query Parameters Name Description Required Default Pattern component - UNKNOWN 16.1.3. Return Type V1GetCertExpiryResponse 16.1.4. Content Type application/json 16.1.5. Responses Table 16.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetCertExpiryResponse 0 An unexpected error response. RuntimeError 16.1.6. Samples 16.1.7. Common object reference 16.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 16.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 16.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 16.1.7.3. 
V1GetCertExpiryResponse Field Name Required Nullable Type Description Format expiry Date date-time | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/credentialexpiryservice |
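A sketch of how this endpoint might be queried with curl; the Central host name, the API token variable, and the component value are assumptions for illustration and should be adapted to your deployment:

curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/credentialexpiry?component=CENTRAL"
# The response carries the expiry timestamp described in V1GetCertExpiryResponse.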
Chapter 1. About Red Hat OpenShift Dev Spaces | Chapter 1. About Red Hat OpenShift Dev Spaces Red Hat OpenShift Dev Spaces provides web-based development environments on Red Hat OpenShift with an enterprise-level setup: Cloud Development Environments (CDE) server IDEs such as Microsoft Visual Studio Code - Open Source and JetBrains IntelliJ IDEA Community ( Technology Preview ) Containerized environments with popular programming languages, frameworks, and Red Hat technologies Red Hat OpenShift Dev Spaces is well-suited for container-based development. Red Hat OpenShift Dev Spaces 3.14 is based on Eclipse Che 7.86. 1.1. Supported platforms OpenShift Dev Spaces runs on OpenShift 4.12-4.15 on the following CPU architectures: AMD64 and Intel 64 ( x86_64 ) IBM Power ( ppc64le ) and IBM Z ( s390x ) Additional resources OpenShift Documentation Red Hat OpenShift Dev Spaces administration guide 1.2. Support policy For Red Hat OpenShift Dev Spaces 3.14, Red Hat will provide support for deployment, configuration, and use of the product. Additional resources OpenShift Dev Spaces life-cycle and support policy . 1.3. Differences between Red Hat OpenShift Dev Spaces and Eclipse Che There are some differences between Red Hat OpenShift Dev Spaces and the upstream project on which it is based, Eclipse Che: OpenShift Dev Spaces is supported only on Red Hat OpenShift. OpenShift Dev Spaces is based on Red Hat Enterprise Linux and is regularly updated to include the latest security fixes. OpenShift Dev Spaces provides devfiles for working with languages and technologies such as Quarkus, Lombok, NodeJS, Python, DotNet, Golang, C/C++, and PHP. You can find the latest sample projects in the devspaces-devfileregistry container image sources . OpenShift Dev Spaces uses OpenShift OAuth for user login and management. Red Hat provides licensing and packaging to ensure enterprise-level support for OpenShift Dev Spaces. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/release_notes_and_known_issues/about-devspaces_devspaces |
8.82. kernel | 8.82. kernel 8.82.1. RHSA-2015:0062 - Important: kernel security, bug fix, and enhancement update Updated kernel packages that fix multiple security issues, several bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6.5 Extended Update Support. Red Hat Product Security has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-3673 , CVE-2014-3687 , Important A flaw was found in the way the Linux kernel's SCTP implementation handled malformed or duplicate Address Configuration Change Chunks (ASCONF). A remote attacker could use either of these flaws to crash the system. CVE-2014-3688 , Important A flaw was found in the way the Linux kernel's SCTP implementation handled the association's output queue. A remote attacker could send specially crafted packets that would cause the system to use an excessive amount of memory, leading to a denial of service. CVE-2014-5045 , Moderate A flaw was found in the way the Linux kernel's VFS subsystem handled reference counting when performing unmount operations on symbolic links. A local, unprivileged user could use this flaw to exhaust all available memory on the system or, potentially, trigger a use-after-free error, resulting in a system crash or privilege escalation. CVE-2014-4608 , Low An integer overflow flaw was found in the way the lzo1x_decompress_safe() function of the Linux kernel's LZO implementation processed Literal Runs. A local attacker could, in extremely rare cases, use this flaw to crash the system or, potentially, escalate their privileges on the system. Red Hat would like to thank Vasily Averin of Parallels for reporting CVE-2014-5045, and Don A. Bailey from Lab Mouse Security for reporting CVE-2014-4608. The CVE-2014-3673 issue was discovered by Liu Wei of Red Hat. Bug Fixes BZ# 1108360 Before this update, under certain conditions, the kernel timer could cause the Intelligent Platform Management Interface (IPMI) driver to become unresponsive, resulting in high CPU load. With this update, a patch has been applied, and the IPMI driver no longer hangs. BZ# 1109270 , BZ# 1109712 Previously, when error recovery was restarted, the Output Buffer Full (OBF) timer in the KCS driver was not reset, which led to an immediate timeout. As a consequence, these timing issues caused ipmi to become unresponsive. In addition, numerous error messages were filling up the /var/log/messages file and causing high CPU usage. With this update, patches have been applied to fix this bug, and ipmi no longer hangs in the described situation. BZ# 1135993 Due to certain kernel changes, the TCP Small Queues (TSQ) process did not handle Nagle's algorithm properly when a TCP session became throttled. The underlying source code has been patched, and Nagle's algorithm now works correctly in TSQ. BZ# 1140976 Before this update, due to a bug in the error-handling path, a corrupted metadata block could be used as a valid block. With this update, the error handling path is fixed and more checks are added to verify the metadata block. Now, when a corrupted metadata block is encountered, it is properly marked as corrupted and handled accordingly.
BZ# 1154087 , BZ# 1158321 Previously, log forces with relatively little free stack available occurred deep in the call chain. As a consequence, a stack overflow could occur in the XFS file system and the system could terminate unexpectedly. To fix this bug, log forces have been moved to a work queue, which relieves the stack pressure and avoids the system crash. BZ# 1158324 Before this update, TCP transmit interrupts could not be set lower than the default of 8 buffered tx frames, which under certain conditions led to TCP transmit delays occurring on ixgbe adapters. With this update, a code change removes the restriction of a minimum of 8 buffered frames and allows a transmit to occur with as few as 1 frame. As a result, transmit delays are now minimized. BZ# 1165984 Previously, a coding error in an Ethernet 100 driver update caused improper initialization of certain Physical Layers (PHYs) and the return of RX errors. With this update, the coding error has been fixed, and the device driver works properly. BZ# 1158327 Before this update, the frame buffer (offb) driver did not support setting of the color palette registers on the QEMU standard VGA adapter, which caused incorrect color display. The offb driver has been updated for the QEMU standard VGA adapter, fixing the color issues. BZ# 1142569 Before this update, several race conditions occurred between PCI error recovery callbacks and potential calls of the ifup and ifdown commands in the tg3 driver. When triggered, these race conditions could cause unexpected kernel termination. This bug has been fixed, and the kernel no longer crashes. BZ# 1158889 , BZ# 1162748 Due to hardware bug conditions during TCP Segmentation Offload (TSO) fragment processing, there was a page allocation failure in the kernel and packets were not transmitted. With this update, the more generic Generic Segmentation Offload (GSO) is used as a fallback when TSO fragment processing fails, and packets are now successfully transmitted. BZ# 1163397 Previously, the kernel became unresponsive when using a zombie PID and cgroup. To fix this bug, a patch has been applied, and the kernel no longer hangs. BZ# 1165000 Previously, under certain error conditions, gfs2_converter introduced incorrect values for the on-disk inode's di_goal_meta field. As a consequence, gfs2_converter returned the EBADSLT error on such inodes and did not allow creation of new files in directories or new blocks in regular files. The fix allows gfs2_converter to set a sensible goal value if a corrupt one is encountered and proceed with normal operations. With this update, gfs2_converter implicitly fixes any corrupt goal values, and thus no longer disrupts normal operations. BZ# 1169403 Previously, certain error conditions led to messages being sent to system logs. These messages could become lost instead of being logged, or repeated messages were not suppressed. In extreme cases, the resulting logging volume could cause system lockups or other problems. The relevant test has been reversed to fix this bug, and frequent messages are now suppressed and infrequent messages logged as expected. Enhancement BZ# 1167209 This update adds fixes from Emulex and original equipment manufacturer (OEM) qualifications, including the latest fixes for Skyhawk hardware, to the Emulex be2iscsi driver. Users of the kernel are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add this enhancement. The system must be rebooted for this update to take effect. 8.82.2.
RHSA-2014:1668 - Important: kernel security, bug fix, and enhancement update Updated kernel packages that fix one security issue, several bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6.5 Extended Update Support. The Red Hat Security Response Team has rated this update as having Important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-5077 , Important A NULL pointer dereference flaw was found in the way the Linux kernel's Stream Control Transmission Protocol (SCTP) implementation handled simultaneous connections between the same hosts. A remote attacker could use this flaw to crash the system. Bug Fixes BZ# 1110839 Due to a bug in the kernel signal handling, the decimal floating point (DFP) operations could have been executed with an incorrect rounding mode. As a consequence, DFP calculations could return incorrect or corrupted results. This update fixes this problem by replacing a simple bit mask that was previously used to verify the validity of some values in the floating point control register. The bit mask is replaced by a trial load of the floating point control register. BZ# 1140163 Previously, when freeing a large number of huge pages (several TB), the kernel could experience soft lockup events. This could possibly result in performance problems. The memory management code has been modified to increase the chance of a context switch in this situation, which prevents the occurrence of soft lockup events. BZ# 1122102 A bug in the nouveau driver could prevent the main display of a Lenovo ThinkPad W530 laptop from being initialized after the system was resumed from suspend. This happened if the laptop had an external screen that was detached while the system was suspended. This problem has been fixed by backporting an upstream patch related to the DisplayPort interface. BZ# 1139807 Due to race conditions in the IP Virtual Server (IPVS) code, the kernel could trigger a general protection fault when running the IPVS connection synchronization daemon. With this update, the race conditions in the IPVS code have been addressed, and the kernel no longer crashes when running the IPVS daemon. BZ# 1139345 The kernel could sometimes panic due to a possible division by zero in the kernel scheduler. This bug has been fixed by defining a new div64_ul() division function and correcting the affected calculation in the proc_sched_show_task() function. BZ# 1125980 Removing the rtsx_pci_ms kernel module on some Lenovo ThinkPad series laptops could result in a kernel panic. This update resolves this problem by correcting a bug in the base driver function platform_uevent(). BZ# 1125994 A bug in the Linux Netpoll API could result in a kernel oops if the system had the netconsole service configured over a bonding device. With this update, incorrect flag usage in the netpoll_poll_dev() function has been fixed and the kernel no longer crashes due to this bug. BZ# 1127580 The kernel did not handle exceptions caused by an invalid floating point control (FPC) register, resulting in a kernel oops. This problem has been fixed by placing the label that handles these exceptions at the correct place in the code. BZ# 1138301 Previously, certain network device drivers did not accept ethtool commands right after they were loaded.
As a consequence, the current setting of the specified device driver was not applied and an error message was returned. The ETHTOOL_DELAY variable has been added, which makes sure the ethtool utility waits for some time before it tries to apply the option settings, thus fixing the bug. BZ# 1130630 A rare race between the file system unmount code and the file system notification code could lead to a kernel panic. With this update, a series of patches has been applied to the kernel to prevent this problem. BZ# 1131137 A bug in the bio layer could prevent user space programs from writing data to disk when the system ran under heavy memory fragmentation conditions. This problem has been fixed by modifying a respective function in the bio layer to refuse to add a new memory page only if the page would start a new memory segment and the maximum number of memory segments has already been reached. BZ# 1135713 Due to a bug in the ext3 code, the fdatasync() system call did not force the inode size change to be written to the disk if it was the only metadata change in the file. This could result in the wrong inode size and possible data loss if the system terminated unexpectedly. The code handling inode updates has been fixed and fdatasync() now writes data to the disk as expected in this situation. BZ# 1134258 Previously, the openvswitch driver did not handle frames that contained multiple VLAN headers correctly, which could result in a kernel panic. This update fixes the problem and ensures that openvswitch processes such frames correctly. BZ# 1134696 Later Intel CPUs added a new "Condition Changed" bit to the MSR_CORE_PERF_GLOBAL_STATUS register. Previously, the kernel falsely assumed that this bit indicated a performance interrupt, which prevented other NMI handlers from running. To fix this problem, a patch has been applied to the kernel to ignore this bit in the perf code, enabling other NMI handlers to run. BZ# 1135393 After the VLAN devices over the virtio_net driver were allowed to use the TCP Segmentation Offload (TSO) feature, the segmentation of packets was moved from virtual machines to the host. However, some devices cannot handle TSO using the 8021q module and broke the packets, which resulted in very low throughput (less than 1 Mbps) and transmission of broken packets over the wire. Until this problem is properly fixed, the patch that allowed use of the TSO feature has been reverted; the segmentation is now performed again on virtual machines, and the network throughput is normal. BZ# 1141165 Due to a race condition in the IP Virtual Server (IPVS) code, the kernel could trigger a panic when processing packets from the same connection on different CPUs. This update adds missing spin locks to the code that hashes and unhashes connections from the connection table, and ensures that all packets from the same connection are processed by a single CPU. BZ# 1129994 Previously, small block random I/O operations on IBM Power 8 machines using Emulex 16 Gb Fibre Channel (FC) Host Bus Adapter (HBA) could become unresponsive due to a bug in the lpfc driver. To fix this problem, a memory barrier has been added to the lpfc code to ensure that a valid bit is read before the CQE payload. BZ# 1126681 Running the "bridge link show" command on a system with configured bridge devices could trigger a kernel panic. This happened because both RTNL message types were not properly unregistered from the bridge module.
This update ensures that both RTNL message types are correctly unregistered and the kernel panic no longer occurs in this situation. BZ# 1114406 Previously, the NFS server did not handle correctly situations when multiple NFS clients were appending data to a file using write delegations, and the data might become corrupted. This update fixes this bug by adjusting a NFS cache validity check in the relevant NFS code, and the file accessed in this scenario now contains valid data. BZ# 1131977 Previously, the IPv4 routing code allowed the IPv4 garbage collector to run in parallel on multiple CPUs with the exact configuration. This could greatly decrease performance of the system, and eventually result in soft lockups after the system reached certain load. To resolve this problem and improve performance of the garbage collector, the collector has been moved to the work queue where it is run asynchronously. Enhancements BZ# 1133834 A new "nordirplus" option has been implemented for the exportfs utility for NFSv3. This option allows the user to disable READDIRPLUS requests for the given NFSv3 export, and thus prevent unwanted disk access in certain scenarios. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add this enhancement. The system must be rebooted for this update to take effect. 8.82.3. RHSA-2014:1167 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues, several bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-0205 , Important A flaw was found in the way the Linux kernel's futex subsystem handled reference counting when requeuing futexes during futex_wait(). A local, unprivileged user could use this flaw to zero out the reference counter of an inode or an mm struct that backs up the memory area of the futex, which could lead to a use-after-free flaw, resulting in a system crash or, potentially, privilege escalation. CVE-2014-3535 , Important A NULL pointer dereference flaw was found in the way the Linux kernel's networking implementation handled logging while processing certain invalid packets coming in via a VxLAN interface. A remote attacker could use this flaw to crash the system by sending a specially crafted packet to such an interface. CVE-2014-3917 , Moderate An out-of-bounds memory access flaw was found in the Linux kernel's system call auditing implementation. On a system with existing audit rules defined, a local, unprivileged user could use this flaw to leak kernel memory to user space or, potentially, crash the system. CVE-2014-4667 , Moderate An integer underflow flaw was found in the way the Linux kernel's Stream Control Transmission Protocol (SCTP) implementation processed certain COOKIE_ECHO packets. By sending a specially crafted SCTP packet, a remote attacker could use this flaw to prevent legitimate connections to a particular SCTP server socket to be made. Red Hat would like to thank Gopal Reddy Kodudula of Nokia Siemens Networks for reporting CVE-2014-4667. 
The security impact of the CVE-2014-0205 issue was discovered by Mateusz Guzik of Red Hat. Bug Fixes BZ#1089359 Previously, NFSv4 allowed an NFSv4 client to resume an expired or lost file lock. This could result in file corruption if the file was modified in the meantime. This problem has been resolved by a series of patches ensuring that an NFSv4 client no longer attempts to recover expired or lost file locks. BZ#1090613 A false positive bug in the NFSv4 code could result in a situation where an NFS4ERR_BAD_STATEID error was being resent in an infinite loop instead of a bad state ID being recovered. To fix this problem, a series of patches has been applied to the NFSv4 code. The NFS client no longer retries an I/O operation that resulted in a bad state ID error if the nfs4_select_rw_stateid() function returns an -EIO error. BZ#1120651 A change to the Open vSwitch kernel module introduced a use-after-free problem that resulted in a kernel panic on systems that use this module. This update ensures that the affected object is freed on the correct place in the code, thus avoiding the problem. BZ#1118782 Previously, the Huge Translation Lookaside Buffer (HugeTLB) unconditionally allowed access to huge pages. However, huge pages may be unsupported in some environments, such as a KVM guest on the PowerPC architecture when not backed by huge pages, and an attempt to use a base page as a huge page in memory would result in a kernel oops. This update ensures that HugeTLB denies access to huge pages if the huge pages are not supported on the system. BZ#1096397 NFSv4 incorrectly handled a situation when an NFS client received an NFS4ERR_ADMIN_REVOKED error after sending a CLOSE operation. As a consequence, the client kept sending the same CLOSE operation indefinitely although it was receiving NFS4ERR_ADMIN_REVOKED errors. A patch has been applied to the NFSv4 code to ensure that the NFS client sends the particular CLOSE operation only once in this situation. BZ#1099607 NFS previously called the drop_nlink() function after removing a file to directly decrease a link count on the related inode. Consequently, NFS did not revalidate an inode cache, and could thus use a stale file handle, resulting in an ESTALE error. A patch has been applied to ensure that NFS validates the inode cache correctly after removing a file. BZ#1117582 A change to the SCSI code fixed a race condition that could occur when removing a SCSI device. However, that change caused performance degradation because it used a certain function from the block layer code that was returning different values compared with later versions of the kernel. This update alters the SCSI code to properly utilize the values returned by the block layer code. BZ#1102794 Previously, when using a bridge interface configured on top of a bonding interface, the bonding driver was not aware of IP addresses assigned to the bridge. Consequently, with ARP monitoring enabled, the ARP monitor could not target the IP address of the bridge when probing the same subnet. The bridge was thus always reported as being down and could not be reached. With this update, the bonding driver has been made aware of IP addresses assigned to a bridge configured on top of a bonding interface, and the ARP monitor can now probe the bridge as expected. Note that the problem still occurs if the arp_validate option is used. Therefore, do not use this option in this case until this issue is fully resolved. 
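For context on the ARP monitoring described in BZ#1102794, a bonding configuration on Red Hat Enterprise Linux 6 typically enables the ARP monitor through the BONDING_OPTS line of the bond's ifcfg file. The snippet below is illustrative only; the mode, interval, and target address are placeholders, and arp_validate is deliberately left unset because of the note above:

# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative values)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=192.0.2.1"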
BZ#1113824 The automatic route cache rebuilding feature could incorrectly compute the length of a route hash chain if the cache contained multiple entries with the same key but a different TOS, mark, or OIF bit. Consequently, the feature could reach the rebuild limit and disable the routing cache on the system. This problem is fixed by using a helper function that avoids counting such duplicate routes. BZ#1121541 Due to a race condition that allowed a RAID array to be written to while it was being stopped, the md driver could enter a deadlock situation. The deadlock prevented buffers from being written out to the disk, and all I/O operations to the device became unresponsive. With this update, the md driver has been modified so this deadlock is now avoided. BZ#1112226 When booting a guest in the Hyper-V environment and enough Programmable Interval Timer (PIT) interrupts were lost or not injected into the guest on time, the kernel panicked and the guest failed to boot. This problem has been fixed by bypassing the relevant PIT check when the guest is running under the Hyper-V environment. All users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.82.4. RHSA-2014:0981 - Important: kernel security, bug fix, and enhancement update Updated kernel packages that fix multiple security issues, several bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-2851 , Important A use-after-free flaw was found in the way the ping_init_sock() function of the Linux kernel handled the group_info reference counter. A local, unprivileged user could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2012-6647 , Moderate A NULL pointer dereference flaw was found in the way the futex_wait_requeue_pi() function of the Linux kernel's futex subsystem handled the requeuing of certain Priority Inheritance (PI) futexes. A local, unprivileged user could use this flaw to crash the system. CVE-2013-7339 , Moderate A NULL pointer dereference flaw was found in the rds_ib_laddr_check() function in the Linux kernel's implementation of Reliable Datagram Sockets (RDS). A local, unprivileged user could use this flaw to crash the system. CVE-2014-2672 , Moderate It was found that a remote attacker could use a race condition flaw in the ath_tx_aggr_sleep() function to crash the system by creating large network traffic on the system's Atheros 9k wireless network adapter. CVE-2014-2678 , Moderate A NULL pointer dereference flaw was found in the rds_iw_laddr_check() function in the Linux kernel's implementation of Reliable Datagram Sockets (RDS). A local, unprivileged user could use this flaw to crash the system. CVE-2014-2706 , Moderate A race condition flaw was found in the way the Linux kernel's mac80211 subsystem implementation handled synchronization between TX and STA wake-up code paths. A remote attacker could use this flaw to crash the system.
CVE-2014-3144 , CVE-2014-3145 , Moderate An out-of-bounds memory access flaw was found in the Netlink Attribute extension of the Berkeley Packet Filter (BPF) interpreter functionality in the Linux kernel's networking implementation. A local, unprivileged user could use this flaw to crash the system or leak kernel memory to user space via a specially crafted socket filter. Bug Fixes BZ#1107503 Due to a bug in the mount option parser, prefix paths on a CIFS DFS share could be prepended with a double backslash ('\\'), resulting in an incorrect "No such file" error in certain environments. The mount option parser has been fixed and prefix paths now start with a single backslash as expected. BZ#1110170, BZ#1110169, BZ#1110168, BZ#1109885, BZ#1109883 Several concurrency problems that could result in data corruption were found in the implementation of CTR and CBC modes of operation for AES, DES, and DES3 algorithms on IBM S/390 systems. Specifically, a working page was not protected against concurrent invocation in CTR mode. The fallback solution for not getting a working page in CTR mode did not handle iv values correctly. The CBC mode used did not properly save and restore the key and iv values in some concurrency situations. All these problems have been addressed in the code, and the concurrent use of the aforementioned algorithms no longer causes data corruption. BZ#1090749 In a cluster environment, the multicast traffic from the guest to a host could sometimes be unreliable. An attempt to resolve this problem was made with the RHSA-2013-1645 advisory; however, that attempt introduced a regression. This update reverts patches for this problem provided by RHSA-2013-1645 and introduces a new fix for the problem. The problem has been resolved by flooding the network with multicast packets if the multicast querier is disabled and no other querier has been detected. BZ#1106472 The bridge MDB RTNL handlers were incorrectly removed after deleting a bridge from the system with more than one bridge configured. This led to various problems, such as the multicast IGMP snooping data from the remaining bridges not being displayed. This update ensures that the bridge handlers are removed only after the bridge module is unloaded, and the multicast IGMP snooping data now displays correctly in the described situation. BZ#1100574 Due to a bug in the nouveau kernel module, the wrong display output could be modified in certain multi-display configurations. Consequently, on Lenovo ThinkPad T420 and W530 laptops with an external display connected, this could result in the LVDS panel "bleeding" to white during startup, and the display controller might become non-functional until after a reboot. Changes to the display configuration could also trigger the bug under various circumstances. With this update, the nouveau kernel module has been corrected and the said configurations now work as expected. BZ#1103821 When a guest supports Supervisor Mode Execution Protection (SMEP), KVM sets the appropriate permission bits on the guest page table entries (sptes) to emulate SMEP-enforced access. Previously, KVM was incorrectly verifying whether the "smep" bit was set in the host cr4 register instead of the guest cr4 register. Consequently, if the host supported SMEP, it was enforced even though it was not requested, which could render the guest system unbootable. This update corrects the said "smep" bit check and the guest system boots as expected in this scenario.
BZ#1096059 Previously, if an hrtimer interrupt was delayed, all future pending hrtimer events that were queued on the same processor were also delayed until the initial hrtimer event was handled. This could cause all hrtimer processing to stop for a significant period of time. To prevent this problem, the kernel has been modified to handle all expired hrtimer events when handling the initially delayed hrtimer event. BZ#1099725 Previously, hardware could execute commands sent by drivers in FIFO order instead of tagged order. Commands thus could be executed out of sequence, which could result in large latencies and degradation of throughput. With this update, the ATA subsystem tags each command sent to the hardware, ensuring that the hardware executes commands in tagged order. Performance on controllers supporting tagged commands can now increase by 30-50%. BZ#1107931 Due to a bug in the GRE tunneling code, it was impossible to create a GRE tunnel with a custom name. This update corrects the behavior of the ip_tunnel_find() function, allowing users to create GRE tunnels with custom names. BZ#1110658 The qla2xxx driver has been upgraded to version 8.05.00.03.06.5-k2, which provides a number of bug fixes over the previous version in order to correct various timeout problems with the mailbox command. BZ#1093984 The kernel previously did not reset the kernel ring buffer if the trace clock was changed during tracing. However, the new clock source could be inconsistent with the previous clock source, and the resulting trace record thus could contain incomparable time stamps. To ensure that the trace record contains only comparable time stamps, the ring buffer is now reset whenever the trace clock changes. BZ#1103972 Previously, KVM did not accept a PCI domain (segment) number for host PCI devices, making it impossible to assign a PCI device that was a part of a non-zero PCI segment to a virtual machine. To resolve this problem, KVM has been extended to accept a PCI domain number in addition to slot, device, and function numbers. Enhancement BZ#1094403 Users can now set ToS, TTL, and priority values in IPv4 on a per-packet basis. All users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.82.5. RHSA-2014:0771 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-3153 , Important A flaw was found in the way the Linux kernel's futex subsystem handled the requeuing of certain Priority Inheritance (PI) futexes. A local, unprivileged user could use this flaw to escalate their privileges on the system. CVE-2014-1737 , Important A flaw was found in the way the Linux kernel's floppy driver handled user space provided data in certain error code paths while processing FDRAWCMD IOCTL commands. A local user with write access to /dev/fdX could use this flaw to free (using the kfree() function) arbitrary kernel memory.
CVE-2014-1738 , Low It was found that the Linux kernel's floppy driver leaked internal kernel memory addresses to user space during the processing of the FDRAWCMD IOCTL command. A local user with write access to /dev/fdX could use this flaw to obtain information about the kernel heap arrangement. Note A local user with write access to /dev/fdX could use these two flaws (CVE-2014-1737 in combination with CVE-2014-1738) to escalate their privileges on the system. CVE-2014-0203 , Moderate It was discovered that the proc_ns_follow_link() function did not properly return the LAST_BIND value in the last pathname component as is expected for procfs symbolic links, which could lead to excessive freeing of memory and consequent slab corruption. A local, unprivileged user could use this flaw to crash the system. CVE-2014-2039 , Moderate A flaw was found in the way the Linux kernel handled exceptions when user-space applications attempted to use the linkage stack. On IBM S/390 systems, a local, unprivileged user could use this flaw to crash the system. CVE-2013-6378 , Low An invalid pointer dereference flaw was found in the Marvell 8xxx Libertas WLAN (libertas) driver in the Linux kernel. A local user able to write to a file that is provided by the libertas driver and located on the debug file system (debugfs) could use this flaw to crash the system. Note: The debugfs file system must be mounted locally to exploit this issue. It is not mounted by default. CVE-2014-1874 , Low A denial of service flaw was discovered in the way the Linux kernel's SELinux implementation handled files with an empty SELinux security context. A local user who has the CAP_MAC_ADMIN capability could use this flaw to crash the system. Red Hat would like to thank Kees Cook of Google for reporting CVE-2014-3153, Matthew Daley for reporting CVE-2014-1737 and CVE-2014-1738, and Vladimir Davydov of Parallels for reporting CVE-2014-0203. Google acknowledges Pinkie Pie as the original reporter of CVE-2014-3153. Bug Fixes BZ#1086839 Due to a ndlp list corruption bug in the lpfc driver, systems with Emulex LPe16002B-M6 PCIe 2-port 16Gb Fibre Channel Adapters could trigger a kernel panic during I/O operations. A series of patches has been backported to address this problem so the kernel no longer panics during I/O operations on the aforementioned systems. BZ#1096214 A change enabled receive acceleration for VLAN interfaces configured on a bridge interface. However, this change allowed VLAN-tagged packets to bypass the bridge and be delivered directly to the VLAN interfaces. This update ensures that the traffic is correctly processed by a bridge before it is passed to any VLAN interfaces configured on that bridge. BZ#1090750 A change that introduced global clock updates caused guest machines to boot slowly when the host Time Stamp Counter (TSC) was marked as unstable. The slow down increased with the number of vCPUs allocated. To resolve this problem, a patch has been applied to limit the rate of the global clock updates. BZ# 1094287 Due to a bug in the ixgbevf driver, the stripped VLAN information from incoming packets on the ixgbevf interface could be lost, and such packets thus did not reach a related VLAN interface. This problem has been fixed by adding the packet's VLAN information to the Socket Buffer (skb) before passing it to the network stack. As a result, the ixgbevf driver now passes the VLAN-tagged packets to the appropriate VLAN interface. 
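Several of the fixes above concern VLAN interfaces stacked on bridges, bonds, or SR-IOV virtual functions. As a point of reference, such a VLAN interface is normally created with the ip utility; the device name and VLAN ID below are placeholders:

$ ip link add link eth0 name eth0.100 type vlan id 100
$ ip link set dev eth0.100 up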
BZ#1089915 A race condition between completion and timeout handling in the block device code could sometimes trigger a BUG_ON() assertion, resulting in a kernel panic. This update resolves this problem by relocating a relevant function call and the BUG_ON() assertion in the code. BZ#1088779 Systems that use NFS file systems could become unresponsive or trigger a kernel oops due to a use-after-free bug in the duplicate reply cache (DRC) code in the nfsd daemon. This problem has been resolved by modifying nfsd to unhash DRC entries before attempting to use them and to prefer to allocate a new DRC entry from the slab instead of reusing an expired entry from the list. BZ#1092002 When an attempt to create a file on the GFS2 file system failed due to a file system quota violation, the relevant VFS inode was not completely uninitialized. This could result in a list corruption error. This update resolves this problem by correctly uninitializing the VFS inode in this situation. BZ#1069630 Previously, automount could become unresponsive when trying to reconnect to mounts with the direct or offset mount types at system startup. This happened because the device ioctl code did not handle the situation when the relevant caller did not yet own the mount. Also, the umount() command sometimes failed to unmount an NFS file system with the stale root. Both problems have been addressed in the virtual file system code, and automount is now able to mount direct or offset mounts using a new lookup function, kern_path_mountpoint(). The umount() command now handles mount points without their revalidation, which allows the command to unmount NFS file systems with the stale root. BZ#1091424 The kernel did not handle environmental and power warning (EPOW) interrupts correctly. This prevented successful usage of the "virsh shutdown" command to shut down guests on IBM POWER8 systems. This update ensures that the kernel handles EPOW events correctly and also prints informative descriptions for the respective EPOW events. The detailed information about each encountered EPOW can be found in the Real-Time Abstraction Service (RTAS) error log. BZ#1081915 Due to a race condition in the cgroup code, the kernel task scheduler could trigger a kernel panic when it was moving an exiting task between cgroups. A patch has been applied to avoid this kernel panic by replacing several improperly used function calls in the cgroup code. BZ#1081909 An incorrectly placed function call in the cgroup code prevented the notify_on_release functionality from working properly. This functionality is used to remove empty cgroup directories, however due to this bug, some empty cgroup directories were remaining on the system. This update ensures that the notify_on_release functionality is always correctly triggered by correctly ordering operations in the cgroup_task_migrate() function. BZ#1081914 Due to a race condition in the cgroup code, the kernel task scheduler could trigger a use-after-free bug when it was moving an exiting task between cgroups, which resulted in a kernel panic. This update avoids the kernel panic by introducing a new function, cpu_cgroup_exit(). This function ensures that the kernel does not release a cgroup that is not empty yet. BZ#1079869 Due to a bug in the hrtimers subsystem, the clock_was_set() function called an inter-processor interrupt (IPI) from soft IRQ context and waited for its completion, which could result in a deadlock situation. 
A patch has been applied to fix this problem by moving the clock_was_set() function call to the working context. Also during the resume process, the hrtimers_resume() function reprogrammed kernel timers only for the current CPU because it assumed that all other CPUs are offline. However, this assumption was incorrect in certain scenarios, such as when resuming a Xen guest with some non-boot CPUs being only stopped with IRQs disabled. As a consequence, kernel timers were not corrected on CPUs other than the boot CPU even though those CPUs were online. To resolve this problem, hrtimers_resume() has been modified to trigger an early soft IRQ to correctly reprogram kernel timers on all CPUs that are online. BZ#1080104 Due to a change that altered the format of the txselect parameter, the InfiniBand qib driver was unable to support HP branded QLogic QDR InfiniBand cards in HP Blade servers. To resolve this problem, the driver's parsing routine, setup_txselect(), has been modified to handle multi-value strings. BZ#1075653 A change to the virtual file system (VFS) code included the reduction of the PATH_MAX variable by 32 bytes. However, this change was not propagated to the do_getname() function, which had a negative impact on interactions between the getname() and do_getname() functions. This update modifies do_getname() accordingly and this function now works as expected. BZ#1082622 Previously, in certain environments, such as an HP BladeSystem Enclosure with several Blade servers, the kdump kernel could experience a kernel panic or become unresponsive during boot due to lack of available interrupt vectors. As a consequence, kdump failed to capture a core dump. To increase the number of available interrupt vectors, the kdump kernel can boot up with more CPUs. However, the kdump kernel always tries to boot up with the bootstrap processor (BSP), which can cause the kernel to fail to bring up more than one CPU under certain circumstances. This update introduces a new kernel parameter, disable_cpu_apicid, which allows the kdump kernel to disable BSP during boot and then to successfully boot up with multiple processors. This resolves the problem of the lack of available interrupt vectors for systems with a high number of devices and ensures that kdump can now successfully capture a core dump on these systems. BZ#1091826 A patch to the kernel scheduler fixed a kernel panic caused by a divide-by-zero bug in the init_numa_sched_groups_power() function. However, that patch introduced a regression on systems with standard Non-Uniform Memory Access (NUMA) topology so that cpu_power in all but one NUMA domain was set to twice the expected value. This resulted in incorrect task scheduling and some processors being left idle even though there were enough queued tasks to handle, which had a negative impact on system performance. This update ensures that cpu_power on systems with standard NUMA topology is set to the expected values by adding an estimate to cpu_power for every uncounted CPU. Task scheduling now works as expected on these systems without performance issues related to the said bug. BZ#1092870 The RTM_NEWLINK messages can contain information about every virtual function (VF) for the given network interface (NIC) and can become very large if this information is not filtered. Previously, the kernel netlink interface allowed the getifaddr() function to process RTM_NEWLINK messages with unfiltered content.
Under certain circumstances, the kernel netlink interface would omit data for the given group of NICs, causing getifaddr() to loop indefinitely, unable to return information about the affected NICs. This update resolves this problem by supplying only the RTM_NEWLINK messages with filtered content. BZ#1063508 The ext4_releasepage() function previously emitted an unnecessary warning message when it was passed a page with the PageChecked flag set. To avoid irrelevant warnings in the kernel log, this update removes the related WARN_ON() from the ext4 code. BZ#1070296 Microsoft Windows 7 KVM guests could become unresponsive during reboot because KVM did not manage to inject a Non-Maskable Interrupt (NMI) into the guest when handling page faults. To resolve this problem, a series of patches has been applied to the KVM code, ensuring that KVM handles page faults during the reboot of the guest machine as expected. BZ#1096711 The turbostat utility produced error messages when used on systems with fourth-generation Intel Core processors. To fix this problem, the kernel has been updated to provide the C-state residency information for the C8, C9, and C10 C-states. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.82.6. RHSA-2014:0475 - Important: kernel security and bug fix update Updated kernel packages that fix three security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-2523 , Important A flaw was found in the way the Linux kernel's netfilter connection tracking implementation for Datagram Congestion Control Protocol (DCCP) packets used the skb_header_pointer() function. A remote attacker could use this flaw to send a specially crafted DCCP packet to crash the system or, potentially, escalate their privileges on the system. CVE-2013-6383 , Moderate A flaw was found in the way the Linux kernel's Adaptec RAID controller (aacraid) checked permissions of compat IOCTLs. A local attacker could use this flaw to bypass intended security restrictions. CVE-2014-0077 , Moderate A flaw was found in the way the handle_rx() function handled large network packets when mergeable buffers were disabled. A privileged guest user could use this flaw to crash the host or corrupt QEMU process memory on the host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process. The CVE-2014-0077 issue was discovered by Michael S. Tsirkin of Red Hat. Bug Fixes BZ#1078007 Due to recent changes in the Linux memory management, the kernel did not properly handle per-CPU LRU page vectors when hot unplugging CPUs. As a consequence, the page vector of the relevant offline CPU kept memory pages for memory accounting. This prevented the libvirtd daemon from removing the relevant memory cgroup directory upon system shutdown, rendering libvirtd unresponsive. To resolve this problem, the Linux memory management now properly flushes memory pages of offline CPUs from the relevant page vectors.
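The symptom described in BZ#1078007 appears as memory cgroup directories that remain after a guest shuts down. A quick way to look for them is sketched below; the /cgroup/memory mount point and the libvirt/qemu hierarchy are assumptions based on a default Red Hat Enterprise Linux 6 libvirt setup and can differ on other configurations:

$ grep memory /proc/mounts
$ ls /cgroup/memory/libvirt/qemu/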
BZ#1063201 Recent changes in the d_splice_alias() function introduced a bug that allowed d_splice_alias() to return a dentry from a different directory than was the directory being looked up. As a consequence in cluster environment, a kernel panic could be triggered when a directory was being removed while a concurrent cross-directory operation was performed on this directory on another cluster node. This update avoids the kernel panic in this situation by correcting the search logic in the d_splice_alias() function so that the function can no longer return a dentry from an incorrect directory. BZ#1086095 A system could enter a deadlock situation when the Real-Time (RT) scheduler was moving RT tasks between CPUs and the wakeup_kswapd() function was called on multiple CPUs, resulting in a kernel panic. This problem has been fixed by removing a problematic memory allocation and therefore calling the wakeup_kswapd() function from a deadlock-safe context. BZ#1086007 Previously some device mapper kernel modules, such as dm-thin, dm-space-map-metadata, and dm-bufio, contained various bugs that had adverse effects on their proper functioning. This update backports several upstream patches that resolve these problems, including a fix for the metadata resizing feature of device mapper thin provisioning (thinp) and fixes for read-only mode for dm-thin and dm-bufio. As a result, the aforementioned kernel modules now contain the latest upstream changes and work as expected. BZ#1066535 A change in the TCP code that extended the "proto" struct with a new function, release_cb(), broke integrity of the kernel Application Binary Interface (kABI). If the core stack called a newly introduced pointer to this function for a module that was compiled against older kernel headers, the call resulted in out-of-bounds access and a subsequent kernel panic. To avoid this problem, the core stack has been modified to recognize a newly introduced slab flag, RHEL_EXTENDED_PROTO. This allows the core stack to safely access the release_cb pointer only for modules that support it. BZ#1083350 The Completely Fair Scheduler (CFS) did not verify whether the CFS period timer is running while throttling tasks on the CFS run queue. Therefore under certain circumstances, the CFS run queue became stuck because the CFS period timer was inactive and could not be restarted. To fix this problem, the CFS now restarts the CFS period timer inside the throttling function if it is inactive. BZ#1073562 A change removed the ZONE_RECLAIM_LOCKED flag from Linux memory management code in order to fix a NUMA node allocation problem in the memory zone reclaim logic. However, the flag removal allowed concurrent page reclaiming within one memory zone, which, under heavy system load, resulted in unwanted spin lock contention and subsequent performance problems (systems became slow or unresponsive). This update resolves this problem by preventing reclaim threads from scanning a memory zone if the zone does not satisfy scanning requirements. Systems under heavy load no longer suffer from CPU overloading but sustain their expected performance. BZ#1073564 The restart logic for the memory reclaiming with compaction was previously applied on the level of LRU page vectors. This could, however, cause significant latency in memory allocation because memory compaction does not require only memory pages of a certain cgroup but a whole memory zone. 
This performance issue has been fixed by moving the restart logic to the zone level and restarting the memory reclaim for all memory cgroups in a zone when the compaction requires more free pages from the zone. BZ#1074855 Previously, the for_each_isci_host() macro was incorrectly defined so it accessed an out-of-range element for a 2-element array. This macro was also wrongly optimized by GCC 4.8 so that it was executed too many times on platforms with two SCU controllers. As a consequence, the system triggered a kernel panic when entering the S3 state, or a kernel oops when removing the isci module. This update corrects the aforementioned macro and the described problems no longer occur. BZ#1083175 A bug in the vmxnet3 driver allowed potential race conditions to be triggered when the driver was used with the netconsole module. The race conditions allowed the driver's internal NAPI poll routine to run concurrently with the netpoll controller routine, which resulted in data corruption and a subsequent kernel panic. To fix this problem, the vmxnet3 driver has been modified to call the appropriate interrupt handler to schedule NAPI poll requests properly. BZ#1081908 The kernel task scheduler could trigger a race condition while migrating tasks over CPU cgroups. The race could result in accessing a task that pointed to an incorrect parent task group, causing the system to behave unpredictably, for example appearing to be unresponsive. This problem has been resolved by ensuring that the correct task group information is properly stored during the task's migration. BZ#1076056 A previously backported patch to the XFS code added an unconditional call to the xlog_cil_empty() function. If the XFS file system was mounted with the unsupported nodelaylog option, that call resulted in access to an uninitialized spin lock and a consequent kernel panic. To avoid this problem, the nodelaylog option has been disabled; the option is still accepted but no longer has any effect. (The nodelaylog mount option was originally intended only as a testing option upstream, and has since been removed.) BZ#1076242 The SCTP sctp_connectx() ABI did not work properly for 64-bit kernels compiled with 32-bit emulation. As a consequence, applications utilizing the sctp_connectx() function did not run in this case. To fix this problem, a new ABI has been implemented; the COMPAT ABI enables copying and transforming user data from a COMPAT-specific structure to a SCTP-specific structure. Applications that require sctp_connectx() now work without any problems on a system with a 64-bit kernel compiled with 32-bit emulation. BZ#1085660 A bug in the qla2xxx driver caused the kernel to crash. This update resolves this problem by fixing an incorrect condition in the "for" statement in the qla2x00_alloc_iocbs() function. BZ#1079870 The code responsible for creating and binding packet sockets was not optimized and therefore applications that utilized the socket() and bind() system calls did not perform as expected. A patch has been applied to the packet socket code so that latency for socket creation and binding is now significantly lower in certain cases. BZ#1077874 Previously, the vmw_pvscsi driver could attempt to complete a command to the SCSI mid-layer after reporting a successful abort of the command. This led to a double completion bug and a subsequent kernel panic.
This update ensures that the pvscsi_abort() function returns SUCCESS only after the abort is completed, preventing the driver from making invalid attempts to complete the command. BZ#1085658 Due to a bug in the mlx4_en module, a data structure related to time stamping could be accessed before being initialized. As a consequence, loading mlx4_en could result in a kernel crash. This problem has been fixed by moving the initialization of the time stamp mechanism to the correct place in the code. BZ#1078011 Due to a change that refactored the Generic Routing Encapsulation (GRE) tunneling code, the ip_gre module did not work properly. As a consequence, GRE interfaces dropped every packet that had the Explicit Congestion Notification (ECN) bit set and did not have the ECN-Capable Transport (ECT) bit set. This update reintroduces the ipgre_ecn_decapsulate() function that is now used instead of the IP_ECN_decapsulate() function that was not properly implemented. The ip_gre module now works correctly and GRE devices process all packets as expected. BZ#1078641 A bug in the megaraid_sas driver could cause the driver to read the hardware status values incorrectly. As a consequence, the RAID card was disabled during the system boot and the system could fail to boot. With this update, the megaraid_sas driver has been corrected to enable the RAID card on system boot as expected. BZ#1081907 A bug in the Completely Fair Scheduler (CFS) could, under certain circumstances, trigger a race condition while moving a forking task between cgroups. This race could lead to a use-after-free error and a subsequent kernel panic when a child task was accessed while it was pointing to a stale cgroup of its parent task. A patch has been applied to the CFS to ensure that a child task always points to its parent's valid task group. BZ#1078874 The Red Hat GFS2 file system previously limited the number of ACL entries per inode to 25. However, this number was insufficient in some cases, causing the setfacl command to fail. This update increases this limit to a maximum of 300 ACL entries for the 4 KB block size. If the block size is smaller, this value is adjusted accordingly. BZ#1085358 Patches to the CIFS code introduced a regression that prevented users from mounting a CIFS share using the NetBIOS over TCP service on port 139. This problem has been fixed by masking off the top byte in the get_rfc1002_length() function. BZ#1079872 Previously, user space packet capturing libraries, such as libpcap, had limited ability to determine which Berkeley Packet Filter (BPF) extensions are supported by the current kernel. This limitation had a negative effect on VLAN packet filtering that is performed by the tcpdump utility, and tcpdump was sometimes not able to capture filtered packets correctly. Therefore, this update introduces a new option, SO_BPF_EXTENSIONS, which can be specified as an argument of the getsockopt() function. This option enables packet capturing tools to obtain information about which BPF extensions are supported by the current kernel. As a result, the tcpdump utility can now capture packets properly. BZ#1080600 The isci driver previously triggered an erroneous BUG_ON() assertion in case of a hard reset timeout in the sci_apc_agent_link_up() function. If a SATA device was unable to restore the link in time after the reset, the isci port had to return to the "awaiting link-up" state. However, in such a case, the port may not have been in the "resetting" state, causing a kernel panic.
This problem has been fixed by removing that incorrect BUG_ON() assertion. BZ#1078798 Previously, when removing an IPv6 address from an interface, unreachable routes related to that address were not removed from the IPv6 routing table. This happened because the IPv6 code used inappropriate function when searching for the routes. To avoid this problem, the IPv6 code has been modified to use the ip6_route_lookup() function instead of rt6_lookup() in this situation. All related routes are now properly deleted from the routing tables when an IPv6 address is removed. BZ#1075651 If the BIOS returned a negative value for the critical trip point for the given thermal zone during a system boot, the whole thermal zone was invalidated and an ACPI error was printed. However, the thermal zone may still have been needed for cooling. With this update, the ACPI thermal management has been modified to only disable the relevant critical trip point in this situation. BZ#1075554 When allocating kernel memory, the SCSI device handlers called the sizeof() function with a structure name as its argument. However, the modified files were using an incorrect structure name, which resulted in an insufficient amount of memory being allocated and subsequent memory corruption. This update modifies the relevant sizeof() function calls to rather use a pointer to the structure instead of the structure name so that the memory is now always allocated correctly. BZ#1069848 A change that modified the linkat() system call introduced a mount point reference leak and a subsequent memory leak in case that a file system link operation returned the ESTALE error code. These problems have been fixed by properly freeing the old mount point reference in such a case. BZ#1086490 The dm-bufio driver did not call the blk_unplug() function to flush plugged I/O requests. Therefore, the requests submitted by dm-bufio were delayed by 3 ms, which could cause performance degradation. With this update, dm-bufio calls blk_unplug() as expected, avoiding any related performance issues. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.82.7. RHSA-2014:0328 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-0055 , Important A flaw was found in the way the get_rx_bufs() function in the vhost_net implementation in the Linux kernel handled error conditions reported by the vhost_get_vq_desc() function. A privileged guest user could use this flaw to crash the host. CVE-2014-0101 , Important A flaw was found in the way the Linux kernel processed an authenticated COOKIE_ECHO chunk during the initialization of an SCTP connection. A remote attacker could use this flaw to crash the system by initiating a specially crafted SCTP handshake in order to trigger a NULL pointer dereference on the system. 
CVE-2014-0069 , Moderate A flaw was found in the way the Linux kernel's CIFS implementation handled uncached write operations with specially crafted iovec structures. An unprivileged local user with access to a CIFS share could use this flaw to crash the system, leak kernel memory, or, potentially, escalate their privileges on the system. Note: the default cache settings for CIFS mounts on Red Hat Enterprise Linux 6 prohibit a successful exploitation of this issue. CVE-2013-1860 , Low A heap-based buffer overflow flaw was found in the Linux kernel's cdc-wdm driver, used for USB CDC WCM device management. An attacker with physical access to a system could use this flaw to cause a denial of service or, potentially, escalate their privileges. Red Hat would like to thank Nokia Siemens Networks for reporting CVE-2014-0101, and Al Viro for reporting CVE-2014-0069. Bug Fixes BZ#1063507 A change in the Advanced Programmable Interrupt Controller (APIC) code caused a regression on certain Intel CPUs using a Multiprocessor (MP) table. An attempt to read from the local APIC (LAPIC) could be performed before the LAPIC was mapped, resulting in a kernel crash during a system boot. A patch has been applied to fix this problem by mapping the LAPIC as soon as possible when parsing the MP table. BZ#1067775 When removing an inode from a name space on an XFS file system, the file system could enter a deadlock situation and become unresponsive. This happened because the removal operation incorrectly used the AGF and AGI locks in the opposite order than was required by the ordering constraint, which led to a possible deadlock between the file removal and inode allocation and freeing operations. With this update, the inode's reference count is dropped before removing the inode entry with the first transaction of the removal operation. This ensures that the AGI and AGF locks are locked in the correct order, preventing any further deadlocks in this scenario. BZ#1064913 Previously, the GFS2 kernel module leaked memory in the gfs2_bufdata slab cache and allowed a use-after-free race condition to be triggered in the gfs2_remove_from_journal() function. As a consequence after unmounting the GFS2 file system, the GFS2 slab cache could still contain some objects, which subsequently could, under certain circumstances, result in a kernel panic. A series of patches has been applied to the GFS2 kernel module, ensuring that all objects are freed from the slab cache properly and the kernel panic is avoided. BZ#1054072 Due to the locking mechanism that the kernel used while handling Out of Memory (OOM) situations in memory control groups (cgroups), the OOM killer did not work as intended in case that many processes triggered an OOM. As a consequence, the entire system could become or appear to be unresponsive. A series of patches has been applied to improve this locking mechanism so that the OOM killer now works as expected in memory cgroups under heavy OOM load. BZ#1055364 Previously, certain SELinux functions did not correctly handle the TCP synchronize-acknowledgment (SYN-ACK) packets when processing IPv4 labeled traffic over an INET socket. The initial SYN-ACK packets were labeled incorrectly by SELinux, and as a result, the access control decision was made using the server socket's label instead of the new connection's label. In addition, SELinux was not properly inspecting outbound labeled IPsec traffic, which led to similar problems with incorrect access control decisions. 
A series of patches that addresses these problems has been applied to SELinux. The initial SYN-ACK packets are now labeled correctly and SELinux processes all SYN-ACK packets as expected. BZ#1063199 In Red Hat Enterprise Linux 6.5, the TCP Segmentation Offload (TSO) feature is automatically disabled if the corresponding network device does not report any CSUM flag in the list of its features. Previously, VLAN devices that were configured over bonding devices did not propagate their NETIF_F_NO_CSUM flag as expected, and their feature lists thus did not contain any CSUM flags. As a consequence, the TSO feature was disabled for these VLAN devices, which led to poor bandwidth performance. With this update, the bonding driver propagates the aforementioned flag correctly so that network traffic now flows through VLAN devices over bonding without any performance problems. BZ#1064464 Due to a bug in the Infiniband driver, the ip and ifconfig utilities reported the link status of the IP over Infiniband (IPoIB) interfaces incorrectly (as "RUNNING" in case of "ifconfig", and as "UP" in case of "ip") even if no cable was connected to the respective network card. The problem has been corrected by calling the respective netif_carrier_off() function in the right place in the code. The link status of the IPoIB interfaces is now reported correctly in the described situation. BZ#1058418 When performing read operations on an XFS file system, failed buffer readahead can leave the buffer in the cache memory marked with an error. This could lead to incorrect detection of stale errors during completion of an I/O operation because most callers do not zero out the b_error field of the buffer on a subsequent read. To avoid this problem and ensure correct I/O error detection, the b_error field of the used buffer is now zeroed out before submitting an I/O operation on a file. BZ#1062113 Previously, when hot adding memory to the system, the memory management subsystem always performed unconditional page-block scans for all memory sections being set online. The total duration of the hot add operation depends on both the size of memory that the system already has and the size of memory that is being added. Therefore, the hot add operation took an excessive amount of time to complete if a large amount of memory was added or if the target node already had a considerable amount of memory. This update optimizes the code so that page-block scans are performed only when necessary, which greatly reduces the duration of the hot add operation. BZ#1059991 Due to a bug in the SELinux socket receive hook, network traffic was not dropped upon receiving a peer:recv access control denial on some configurations. A broken labeled networking check in the SELinux socket receive hook has been corrected, and network traffic is now properly dropped in the described case. BZ#1060491 When transferring a large amount of data over a Point-to-Point Protocol (PPP) link, a rare race condition between the throttle() and unthrottle() functions in the tty driver could be triggered. As a consequence, the tty driver became unresponsive, remaining in the throttled state, which resulted in the traffic being stalled. Also, if the PPP link was heavily loaded, another race condition in the tty driver could have been triggered. This race allowed an unsafe update of the available buffer space, which could also result in stalled traffic.
A series of patches addressing both race conditions has been applied to the tty driver; if the first race is triggered, the driver loops and forces re-evaluation of the respective test condition, which ensures uninterrupted traffic flow in the described situation. The second race is now completely avoided due to a well-placed read lock, and the update of the available buffer space proceeds correctly. BZ#1058420 Previously, the e752x_edac module incorrectly handled the pci_dev usage count, which could reach zero and deallocate a PCI device structure. As a consequence, a kernel panic could occur when the module was loaded multiple times on some systems. This update fixes the handling of the usage count when the module is loaded and unloaded repeatedly, and a kernel panic no longer occurs. BZ#1057165 When a page table is upgraded, a new top level of the page table is added for the virtual address space, which results in a new Address Space Control Element (ASCE). However, the Translation Lookaside Buffer (TLB) of the virtual address space was not previously flushed on page table upgrade. As a consequence, the TLB contained entries associated with the old ASCE, which led to unexpected program failures and random data corruption. To correct this problem, the TLB entries associated with the old ASCE are now flushed as expected upon page table upgrade. BZ#1064115 When a network interface is running in promiscuous (PROMISC) mode, the interface may receive and process VLAN-tagged frames even though no VLAN is attached to the interface. However, the enic driver did not correctly handle processing of packets with VLAN-tagged frames in PROMISC mode if the frames had no VLAN group assigned, which led to various problems. To handle the VLAN-tagged frames without a VLAN group properly, the frames have to be processed by the VLAN code, and the enic driver thus no longer verifies whether the packet's VLAN group field is empty. BZ#1057164 A change in the Linux memory management on IBM System z removed the handler for the Address Space Control Element (ASCE) type of exception. As a consequence, the kernel was unable to handle ASCE exceptions, which led to a kernel panic. Such an exception was triggered, for example, if the kernel attempted to access user memory with an address that was larger than the current page table limit from a user-space program. This problem has been fixed by calling the standard page fault handler, do_dat_exception, if an ASCE exception is raised. BZ#1063271 Due to several bugs in the network console logging, a race condition between the network console send operation and the driver's IRQ handler could occur, or the network console could access invalid memory content. As a consequence, the respective driver, such as vmxnet3, triggered a BUG_ON() assertion and the system terminated unexpectedly. A patch addressing these bugs has been applied so that the driver's IRQs are disabled before processing the send operation and the network console now accesses the RCU-protected (read-copy update) data properly. Systems using the network console logging no longer crash due to the aforementioned conditions. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.82.8. RHSA-2014:0159 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2013-6381 , Important A buffer overflow flaw was found in the way the qeth_snmp_command() function in the Linux kernel's QETH network device driver implementation handled SNMP IOCTL requests with an out-of-bounds length. A local, unprivileged user could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2013-2929 , Low A flaw was found in the way the get_dumpable() function return value was interpreted in the ptrace subsystem of the Linux kernel. When 'fs.suid_dumpable' was set to 2, a local, unprivileged user could use this flaw to bypass intended ptrace restrictions and obtain potentially sensitive information. CVE-2013-7263 , CVE-2013-7265 , Low It was found that certain protocol handlers in the Linux kernel's networking implementation could set the addr_len value without initializing the associated data structure. A local, unprivileged user could use this flaw to leak kernel stack memory to user space using the recvmsg, recvfrom, and recvmmsg system calls. Bug Fixes BZ#1051393 Due to a bug in the NFS code, the state manager and the DELEGRETURN operation could enter a deadlock if an asynchronous session error was received while DELEGRETURN was being processed by the state manager. The state manager became unable to process the failing DELEGRETURN operation because it was waiting for an asynchronous RPC task to complete, which could not have been completed because the DELEGRETURN operation was cycling indefinitely with session errors. A series of patches has been applied to ensure that the asynchronous error handler waits for recovery when a session error is received and the deadlock no longer occurs. BZ#1049590 The IPv4 and IPv6 code contained several issues related to the conntrack fragmentation handling that prevented fragmented packets from being properly reassembled. This update applies a series of patches and ensures that MTU discovery is handled properly, and fragments are correctly matched and packets reassembled. BZ#1046043 Inefficient usage of Big Kernel Locks (BKLs) in the ptrace() system call could lead to BKL contention on certain systems that widely utilize ptrace(), such as User-mode Linux (UML) systems, resulting in degraded performance on these systems. This update removes the relevant BKLs from the ptrace() system call, thus resolving any related performance issues. BZ#1046041 When utilizing SCTP over the bonding device in Red Hat Enterprise Linux 6.5, SCTP assumed offload capabilities on virtual devices where it was not guaranteed that underlying physical devices were equipped with these capabilities. As a consequence, checksums of the outgoing packets became corrupted and a network connection could not be properly established. A patch has been applied to ensure that checksums of packets sent to devices without SCTP checksum capabilities are properly calculated in the software fallback. SCTP connections over the bonding devices can now be established as expected in Red Hat Enterprise Linux 6.5.
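To see which checksum offload capabilities a bonding device currently advertises to the protocol layers, the ethtool feature listing can be inspected. This is a minimal illustrative sketch that assumes a bonding interface named bond0; it is not a command taken from the erratum.

# List the checksum-related offload features reported for the bonding device:
ethtool -k bond0 | grep -i checksum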
BZ#1044566 The context of the user's process could not previously be saved on PowerPC platforms if the VSX Machine State Register (MSR) bit was set but the user did not provide enough space to save the VSX state. This update allows the VSX MSR bit to be cleared in such a situation, indicating that there is no valid VSX state in the user context. BZ#1043779 After a statically defined gateway became unreachable and its corresponding neighbor entry entered a FAILED state, the gateway stayed in the FAILED state even after it became reachable again. As a consequence, traffic was not routed through that gateway. This update enables probing such a gateway automatically so that the traffic can be routed through this gateway again once it becomes reachable. BZ#1040826 Due to several bugs in the IPv6 code, a soft lockup could occur when the number of cached IPv6 destination entries reached the garbage collector threshold on a high-traffic router. A series of patches has been applied to address this problem. These patches ensure that the route probing is performed asynchronously to prevent a deadlock with garbage collection. Also, the garbage collector is now run asynchronously, preventing CPUs that concurrently requested the garbage collector from waiting until all other CPUs finish the garbage collection. As a result, soft lockups no longer occur in the described situation. BZ#1035347 A change to the md driver disabled the TRIM operation for RAID5 volumes in order to prevent a possible kernel oops. However, if an MD RAID volume was reshaped to a different RAID level, this could result in TRIM being disabled on the resulting volume, as the RAID4 personality is used for certain reshapes. A patch has been applied that corrects this problem by setting the stacking limits before changing a RAID level, and thus ensuring the correct discard (TRIM) granularity for the RAID array. BZ#1051395 NFS previously allowed a race between "silly rename" operations and the rmdir() function to occur when removing a directory right after an unlinked file in the directory was closed. As a result, rmdir() could fail with an EBUSY error. This update applies a patch ensuring that NFS waits for any asynchronous operations to complete before performing the rmdir() operation. BZ#1051394 Due to a bug in the EDAC driver, the driver failed to decode and report errors on AMD family 16h processors correctly. This update incorporates a missing case statement into the code so that the EDAC driver now handles errors as expected. BZ#1045094 A deadlock between the state manager, kswapd daemon, and the sys_open() function could occur when the state manager was recovering from an expired state and recovery OPEN operations were being processed. To fix this problem, NFS has been modified to ignore all errors from the LAYOUTRETURN operation (a pNFS operation) except for "NFS4ERR_DELAY" in this situation. BZ#1040498 The bnx2x driver handled unsupported TLVs received from a Virtual Function (VF) using the VF-PF channel incorrectly; when a driver of the VF sent a known but unsupported TLV command to the Physical Function, the driver of the PF did not reply. As a consequence, the VF-PF channel was left in an unstable state and the VF eventually timed out. A patch has been applied to correct the VF-PF locking scheme so that unsupported TLVs are properly handled and responded to by the PF side. Also, unsupported TLVs could previously render a mutex used to lock the VF-PF operations unusable.
The mutex then stopped protecting critical sections of the code, which could result in error messages being generated when the PF received additional TLVs from the VF. A patch has been applied that corrects the VF-PF channel locking scheme, and unsupported TLVs thus can no longer break the VF-PF lock. BZ#1040497 A bug in the statistics flow in the bnx2x driver caused the card's DMA Engine (DMAE) to be accessed without taking a necessary lock. As a consequence, previously queued DMAE commands could be overwritten and the Virtual Functions then could time out on requests to their respective Physical Functions. The likelihood of triggering the bug was higher with more SR-IOV Virtual Functions configured. Overwriting of the DMAE commands could also result in other problems even without using SR-IOV. This update ensures that all flows utilizing DMAE use the same API and the proper locking scheme is kept by all these flows. BZ#1035339 When starting or waking up a system that utilized an AHCI controller with empty ports, and the EM transmit bit was busy, the AHCI driver incorrectly released the related error handler before initiation of the sleep operation. As a consequence, the error handler could be acquired by a different port of the AHCI controller and the Serial General Purpose Input/Output (SGPIO) signal could eventually blink the rebuild pattern on an empty port. This update implements cross-port error handler exclusion in the generic ATA driver, and the AHCI driver has been modified to use the msleep() function in this particular case. The error handler is no longer released upon the sleep operation and the SGPIO signal can no longer indicate the disk's rebuild on the controller's empty slot. BZ#1032389 Changes to the igb driver caused the ethtool utility to determine and display some capabilities of the Ethernet devices incorrectly. This update fixes the igb driver so that the actual link capabilities are now determined properly, and ethtool displays values as accurately as possible based on the data available to the driver. All users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 8.82.9. RHSA-2013:1801 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs, and add two enhancements, are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2013-4470 , Important A flaw was found in the way the Linux kernel's TCP/IP protocol suite implementation handled sending of certain UDP packets over sockets that used the UDP_CORK option when the UDP Fragmentation Offload (UFO) feature was enabled on the output device. A local, unprivileged user could use this flaw to cause a denial of service or, potentially, escalate their privileges on the system. CVE-2013-6367 , Important A divide-by-zero flaw was found in the apic_get_tmcct() function in KVM's Local Advanced Programmable Interrupt Controller (LAPIC) implementation. A privileged guest user could use this flaw to crash the host.
CVE-2013-6368 , Important A memory corruption flaw was discovered in the way KVM handled virtual APIC accesses that crossed a page boundary. A local, unprivileged user could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2013-2141 , Low An information leak flaw in the Linux kernel could allow a local, unprivileged user to leak kernel memory to user space. Red Hat would like to thank Hannes Frederic Sowa for reporting CVE-2013-4470, and Andrew Honig of Google for reporting CVE-2013-6367 and CVE-2013-6368. Bug Fixes BZ#1027343 Due to a regression bug in the mlx4 driver, Mellanox mlx4 adapters could become unresponsive on heavy load along with IOMMU allocation errors being logged to the systems logs. A patch has been applied to the mlx4 driver so that the driver now calculates the last memory page fragment when allocating memory in the Rx path. BZ#1028278 A bug in the RSXX DMA handling code allowed DISCARD operations to call the pci_unmap_page() function, which triggered a race condition on the PowerPC architecture when DISCARD, READ, and WRITE operations were issued simultaneously. However, DISCARD operations are always assigned a DMA address of 0 because they are never mapped. Therefore, this race could result in freeing memory that was mapped for another operation and a subsequent EEH event. A patch has been applied, preventing the DISCARD operations from calling pci_unmap_page(), and thus avoiding the aforementioned race condition. BZ#1029330 Due to a missing part of the bcma driver, the brcmsmac kernel module did not have a list of internal aliases that was needed by the kernel to properly handle the related udev events. Consequently, when the bcma driver scanned for the devices at boot time, these udev events were ignored and the kernel did not load the brcmsmac module automatically. A patch that provides missing aliases has been applied so that the udev requests of the brcmsmac module are now handled as expected and the kernel loads the brcmsmac module automatically on boot. BZ#1029997 A bug in the mlx4 driver could trigger a race between the "blue flame" feature's traffic flow and the stamping mechanism in the Tx ring flow when processing Work Queue Elements (WQEs) in the Tx ring. Consequently, the related queue pair (QP) of the mlx4 Ethernet card entered an error state and the traffic on the related Tx ring was blocked. A patch has been applied to the mlx4 driver so that the driver does not stamp the last completed WQE in the Tx ring, and thus avoids the aforementioned race. BZ#1030171 A change in the NFSv4 code resulted in breaking the sync NFSv4 mount option. A patch has been applied that restores functionality of the sync mount option. BZ#1030713 Due to a bug in the Emulex lpfc driver, the driver could not allocate a SCSI buffer properly, which resulted in severe performance degradation of lpfc adapters on 64-bit PowerPC systems. A patch addressing this problem has been applied so that lpfc allocates the SCSI buffer correctly and lpfc adapters now work as expected on 64-bit PowerPC systems. BZ#1032162 When performing I/O operations on a heavily-fragmented GFS2 file system, significant performance degradation could occur. This was caused by the allocation strategy that GFS2 used to search for an ideal contiguous chunk of free blocks in all the available resource groups (rgrp). A series of patches has been applied that improves performance of GFS2 file systems in case of heavy fragmentation. 
GFS2 now allocates the biggest extent found in the rgrp if it fulfills the minimum requirements. GFS2 has also reduced the amount of bitmap searching in case of multi-block reservations by keeping track of the smallest extent for which the multi-block reservation would fail in the given rgrp. This improves GFS2 performance by avoiding unnecessary rgrp free block searches that would fail. Additionally, this patch series fixes a bug in the GFS2 block allocation code where a multi-block reservation was not properly removed from the rgrp's reservation tree when it was disqualified, which eventually triggered a BUG_ON() macro due to an incorrect count of reserved blocks. BZ#1032167 An earlier patch to the kernel added the dynamic queue depth throttling functionality to QLogic's qla2xxx driver, which allowed the driver to adjust queue depth for attached SCSI devices. However, the kernel might have crashed when this functionality was enabled in certain environments, such as on systems with EMC PowerPath Multipathing installed that were under heavy I/O load. To resolve this problem, the dynamic queue depth throttling functionality has been removed from the qla2xxx driver. BZ#1032168 Previously, devices using the ixgbevf driver that were assigned to a virtual machine could not adjust their Jumbo MTU value automatically if the Physical Function (PF) interface was down; when the PF device was brought up, the MTU value on the related Virtual Function (VF) device was set incorrectly. This was caused by the way the communication channel between PF and VF interfaces was set up and the first negotiation attempt between PF and VF was made. To fix this problem, structural changes to the ixgbevf driver have been made so that the kernel can now negotiate the correct API between PF and VF successfully and the MTU value is now set correctly on the VF interface in this situation. BZ#1032170 A bug in the ixgbe driver caused IPv6 hardware filtering tables not to be correctly rewritten upon interface reset when using a bridge device over the PF interface in an SR-IOV environment. As a result, the IPv6 traffic between VFs was interrupted. An upstream patch has been backported to modify the ixgbe driver so that the update of the Multicast Table Array (MTA) is now unconditional, avoiding possible inconsistencies in the MTA table upon PF's reset. The IPv6 traffic between VFs proceeds as expected in this scenario. BZ#1032247 When using Haswell HDMI audio controllers with an unaligned DMA buffer size, these audio controllers could become locked up until a reboot for certain audio stream configurations. A patch has been applied to Intel's High Definition Audio (HDA) driver that enforces the DMA buffer alignment setting for the Haswell HDMI audio controllers. These audio controllers now work as expected. BZ#1032249 As a result of a recent fix preventing a deadlock upon an attempt to cover an active XFS log, the behavior of the xfs_log_need_covered() function has changed. However, xfs_log_need_covered() is also called to ensure that the XFS log tail is correctly updated as a part of the XFS journal sync operation. As a consequence, when shutting down an XFS file system, the sync operation failed and some files might have been lost. A patch has been applied to ensure that the tail of the XFS log is updated by logging a dummy record to the XFS journal. The sync operation completes successfully and files are properly written to the disk in this situation.
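As a simple way to make sure buffered data reaches the disk before an XFS file system is taken offline, a sync can be issued before the unmount. This is a generic illustration that assumes a mount point of /mnt/xfs; it is not part of the fix itself.

# Flush dirty data to disk, then cleanly unmount the file system:
sync
umount /mnt/xfs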
BZ#1032250 A chunk of a patch was left out when backporting a batch of patches that fixed an infinite loop problem in the LOCK operation with zero state ID during NFSv4 state ID recovery. As a consequence, the system could become unresponsive on numerous occasions. The missing chunk of the patch has been added, resolving this hang issue. BZ#1032260 When performing buffered WRITE operations from multiple processes to a single file, the NFS code previously always verified whether the lock owner information is identical for the file being accessed even though no file locks were involved. This led to performance degradation because forked child processes had to synchronize dirty data written to a disk by the parent process before writing to a file. Also, when coalescing requests into a single READ or WRITE RPC call, NFS refused the request if the lock owner information did not match for the given file even though no file locks were involved. This also caused performance degradation. A series of patches has been applied that relax relevant test conditions so that lock owner compatibility is no longer verified in the described cases, which resolves these performance issues. BZ#1032395 Due to a bug in the mlx4 driver, Mellanox Ethernet cards were brought down unexpectedly while adjusting their Tx or Rx ring. A patch has been applied so that the mlx4 driver now properly verifies the state of the Ethernet card when the coalescing of the Tx or Rx ring is being set, which resolves this problem. BZ#1032423 When the system was under memory stress, a double-free bug in the tg3 driver could have been triggered, resulting in a NIC being brought down unexpectedly followed by a kernel panic. A patch has been applied that restructures the respective code so that the affected ring buffer is freed correctly. BZ#1032424 The RPC client always retransmitted zero-copy of the page data if it timed out before the first RPC transmission completed. However, such a retransmission could cause data corruption if using the O_DIRECT buffer and the first RPC call completed while the respective TCP socket still held a reference to the pages. To prevent the data corruption, retransmission of the RPC call is, in this situation, performed using the sendmsg() function. The sendmsg() function retransmits an authentic reproduction of the first RPC transmission because the TCP socket holds the full copy of the page data. BZ#1032688 When creating an XFS file system, an attempt to cover an active XFS log could, under certain circumstances, result in a deadlock between the xfssyncd and xfsbufd daemons. Consequently, several kernel threads became unresponsive and the XFS file system could not have been successfully created, leading to a kernel oops. A patch has been applied to prevent this situation by forcing the active XFS log onto a disk. Enhancements BZ#1020518 The kernel now supports memory configurations with more than 1TB of RAM on AMD systems. BZ#1032426 The kernel has been modified to stop reporting ABS_MISC events on Wacom touch devices in order to ensure that the devices are correctly recognized by the HAL daemon. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements. The system must be rebooted for this update to take effect. 8.82.10. 
RHSA-2013:1645 - Important: Red Hat Enterprise Linux 6 kernel update Updated kernel packages that fix multiple security issues, address several hundred bugs, and add numerous enhancements are now available as part of the ongoing support and maintenance of Red Hat Enterprise Linux version 6. This is the fifth regular update. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2013-4387 , Important A flaw was found in the way the Linux kernel's IPv6 implementation handled certain UDP packets when the UDP Fragmentation Offload (UFO) feature was enabled. A remote attacker could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2013-0343 , Moderate A flaw was found in the way the Linux kernel handled the creation of temporary IPv6 addresses. If the IPv6 privacy extension was enabled (/proc/sys/net/ipv6/conf/eth0/use_tempaddr set to '2'), an attacker on the local network could disable IPv6 temporary address generation, leading to a potential information disclosure. CVE-2013-2888 , Moderate A flaw was found in the way the Linux kernel handled HID (Human Interface Device) reports with an out-of-bounds Report ID. An attacker with physical access to the system could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2013-4345 , Moderate An off-by-one flaw was found in the way the ANSI CPRNG implementation in the Linux kernel processed non-block size aligned requests. This could lead to random numbers being generated with less bits of entropy than expected when ANSI CPRNG was used. CVE-2013-4591 , Moderate It was found that the fix for CVE-2012-2375 released via RHSA-2012:1580 accidentally removed a check for small-sized result buffers. A local, unprivileged user with access to an NFSv4 mount with ACL support could use this flaw to crash the system or, potentially, escalate their privileges on the system . CVE-2013-4592 , Moderate A flaw was found in the way IOMMU memory mappings were handled when moving memory slots. A malicious user on a KVM host who has the ability to assign a device to a guest could use this flaw to crash the host. CVE-2013-2889 , CVE-2013-2892 , Moderate Heap-based buffer overflow flaws were found in the way the Zeroplus and Pantherlord/GreenAsia game controllers handled HID reports. An attacker with physical access to the system could use these flaws to crash the system or, potentially, escalate their privileges on the system. CVE-2012-6542 , CVE-2013-3231 , Low Two information leak flaws were found in the logical link control (LLC) implementation in the Linux kernel. A local, unprivileged user could use these flaws to leak kernel stack memory to user space. CVE-2013-1929 , Low A heap-based buffer overflow in the way the tg3 Ethernet driver parsed the vital product data (VPD) of devices could allow an attacker with physical access to a system to cause a denial of service or, potentially, escalate their privileges. CVE-2012-6545 , CVE-2013-1928 , CVE-2013-2164 , CVE-2013-2234 , Low Information leak flaws in the Linux kernel could allow a privileged, local user to leak kernel memory to user space. 
CVE-2013-2851 , Low A format string flaw was found in the Linux kernel's block layer. A privileged, local user could potentially use this flaw to escalate their privileges to kernel level (ring0). Red Hat would like to thank Stephan Mueller for reporting CVE-2013-4345, and Kees Cook for reporting CVE-2013-2851. Bug Fixes BZ# 955712 A function in the RPC code responsible for verifying whether the cached credentials match the current process did not perform the check correctly. The code checked only whether the groups in the current process credentials appear in the same order as in the cached credential but did not ensure that no other groups are present in the cached credentials. As a consequence, when accessing files in NFS mounts, a process with the same UID and GID as the original process but with a non-matching group list could have been granted unauthorized access to a file or, under certain circumstances, the process could have been wrongly prevented from accessing the file. The incorrect test condition has been fixed and the problem can no longer occur. BZ# 629857 When the state of the netfilter module was out-of-sync, a TCP connection was recorded in the conntrack table although the TCP connection did not exist between two hosts. If a host re-established this connection with the same source port, destination port, source address, and destination address, the host sent a TCP SYN packet and the peer sent back an acknowledgment for this SYN packet. However, because netfilter was out-of-sync, netfilter dropped this acknowledgment, and deleted the connection item from the conntrack table, which consequently caused the host to retransmit the SYN packet. A patch has been applied to improve this handling; if an unexpected SYN packet appears, the TCP options are annotated. Acknowledgment for the SYN packet serves as a confirmation of the connection tracking being out-of-sync; a new connection record is then created using the information annotated previously to avoid the retransmission delay. BZ# 955807 Due to several bugs in the ext4 code, data integrity system calls did not always properly persist data on the disk. Therefore, the unsynchronized data in the ext4 file system could have been lost after the system's unexpected termination. A series of patches has been applied to the ext4 code to address this problem, including a fix that ensures proper usage of data barriers in the code responsible for file synchronization. Data loss no longer occurs in the described situation. BZ# 953630 C-states for the Intel Family 6, Model 58 and 62, processors were not properly initialized in Red Hat Enterprise Linux 6. Consequently, these processors were unable to enter deep C-states. Also, C-state accounting was not functioning properly and power management tools, such as powertop or turbostat, thus displayed incorrect C-state transitions. This update applies a patch that ensures proper C-states initialization so the aforementioned processors can now enter deep core power states as expected. Note that this update does not correct C-state accounting, which has been addressed by a separate patch. BZ# 953342 The kernel previously did not handle the situation where the system needed to fall back from non-flat Advanced Programmable Interrupt Controller (APIC) mode to flat APIC mode. Consequently, a NULL pointer was dereferenced and a kernel panic occurred. This update adds the flat_probe() function to the APIC driver, which allows the kernel to use flat APIC mode as a fall-back option.
The kernel no longer panics in this situation. BZ# 952785 When attempting to deploy a virtual machine on a hypervisor with multiple NICs and macvtap devices, a kernel panic could occur. This happened because the macvtap driver did not gracefully handle a situation where the macvlan_port.vlans list was empty and returned a NULL pointer. This update applies a series of patches which fix this problem using a read-copy-update (RCU) mechanism and by preventing the driver from returning a NULL pointer if the list is empty. The kernel no longer panics in this scenario. BZ# 952329 Due to a missing structure, the NFSv4 error handler did not handle exceptions caused by revoking NFSv4 delegations. Consequently, the NFSv4 client received the EIO error message instead of the NFS4ERR_ADMIN_REVOKED error. This update modifies the NFSv4 code to no longer require the nfs4_state structure in order to revoke a delegation. BZ# 952174 On KVM guests with the KVM clock (kvmclock) as a clock source and with some VCPUs pinned, certain VCPUs could experience significant sleep delays (elapsed time was greater than 20 seconds). This resulted in unexpected delays by sleeping functions and inaccurate measurement for low latency events. The problem happened because a kvmclock update was isolated to a certain VCPU so the NTP frequency correction applied only to that single VCPU. This problem has been resolved by a patch allowing kvmclock updates to all VCPUs on the KVM guest. VCPU sleep time now does not exceed the expected amount and no longer causes the aforementioned problems. BZ# 951937 When using applications that intensively utilized memory mapping, customers experienced significant application latency, which led to serious performance degradation. A series of patches has been applied to fix the problem. Among other changes, the patches modify the memory mapping code to allow block devices to require stable page writes, enforce stable page writes only if required by a backing device, and optionally snapshot page content to provide stable pages during write. As a result, application latency has been improved by a considerable amount and applications with a high demand for memory mapping now perform as expected. BZ# 997845 The RAID1 and RAID10 code previously called the raise_barrier() and lower_barrier() functions instead of the freeze_array() and unfreeze_array() functions, which are safe to call from within the management thread. As a consequence, a deadlock situation could occur if an MD array contained a spare disk, rendering the respective kernel thread unresponsive. Furthermore, if a shutdown sequence was initiated after this problem had occurred, the shutdown sequence became unresponsive and any in-cache file system data that were not synchronized to the disk were lost. A patch correcting this problem has been applied and the RAID1 and RAID10 code now uses management-thread safe functions as expected. BZ# 996802 Changes to the Linux kernel network driver code introduced the TCP Small Queues (TSQ) feature. However, these changes led to performance degradation on certain network devices, such as devices using the ixgbe driver. This problem has been fixed by a series of patches to the TCP Segmentation Offload (TSO) and TSQ features that include support for setting the size of TSO frames, and a dynamic limit for the number of packet queues on device queues for a given TCP flow.
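When investigating this kind of throughput regression, it can be useful to check which segmentation offloads are currently active on the affected interface. The commands below are a generic illustrative sketch that assumes an interface named eth0; they are not part of the fix itself.

# Show the current TSO and GSO offload settings for the interface:
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
# Offloads can be toggled temporarily for testing, for example:
ethtool -K eth0 tso off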
BZ# 950598 If an NFSv4 client was checking open permissions for a delegated OPEN operation during OPEN state recovery of an NFSv4 server, the NFSv4 state manager could enter a deadlock. This happened because the client was holding the NFSv4 sequence ID of the OPEN operation. This problem is resolved by releasing the sequence ID before the client starts checking open permissions. BZ# 983288 NFS previously allowed extending an NFS file write to cover a full page only if the file did not have a byte-range lock set. However, extending the write to cover the entire page is sometimes desirable in order to avoid fragmentation inefficiencies. For example, a noticeable performance decrease was reported if a series of small non-contiguous writes was performed on the file. A patch has been applied to the NFS code that allows NFS to extend a file write to a full page write if the whole file is locked for writing or if the client holds a write delegation. BZ# 998752 A patch included in kernel version 2.6.32-358.9.1.el6, to fix handling of revoked NFSv4 delegations, introduced a regression bug to the NFSv4 code. This regression in the NFSv4 exception and asynchronous error handling allowed, under certain circumstances, passing a NULL inode to an NFSv4 delegation-related function, which resulted in a kernel panic. The NFSv4 exception and asynchronous error handling has been fixed so that a NULL inode can no longer be passed in this situation. BZ# 947582 XFS file systems were occasionally shut down with the "xfs_trans_ail_delete_bulk: attempting to delete a log item that is not in the AIL" error message. This happened because the EFI/EFD handling logic was incorrect and the EFI log item could have been freed before it was placed in the AIL and committed. A patch has been applied to the XFS code fixing the EFI/EFD handling logic and ensuring that the EFI log items are never freed before the EFD log items are processed. The aforementioned error no longer occurs on an XFS shutdown. BZ# 947275 A bug in the autofs4 mount expiration code could cause the autofs4 module to falsely report a busy tree of NFS mounts as "not in use". Consequently, automount attempted to unmount the tree and failed with a "failed to umount offset" error, leaving the mount tree to appear as empty directories. A patch has been applied to remove an incorrectly used autofs dentry mount check and the aforementioned problem no longer occurs. BZ# 927988 Cyclic adding and removing of the st kernel module could previously cause a system to become unresponsive. This was caused by a disk queue reference count bug in the SCSI tape driver. An upstream patch addressing this bug has been backported to the SCSI tape driver and the system now responds as expected in this situation. BZ# 927918 An update introduced a new failure mode to the blk_get_request() function, returning the -ENODEV error code when a block device queue is being destroyed. However, the change did not include a NULL pointer check for all callers of the function. Consequently, the kernel could dereference a NULL pointer when removing a block device from the system, which resulted in a kernel panic. This update applies a patch that adds these missing NULL pointer checks. Also, some callers of the blk_get_request() function could previously return the -ENOMEM error code instead of -ENODEV, which would lead to incorrect call chain propagation. This update applies a patch ensuring that correct return codes are propagated.
BZ# 790921 By default, the kernel uses a best-fit algorithm for allocating Virtual Memory Areas (VMAs) to map processed files to the address space. However, if an enormous number of small files (hundreds of thousands or millions) was being mapped, the address space became extremely fragmented, which resulted in significant CPU usage and performance degradation. This update introduces an optional next-fit policy which, if enabled, allows for mapping of a file to the first suitable unused area in the address space that follows after the previously allocated VMA. BZ# 960717 A rare race condition between the "devloss" timeout and discovery state machine could trigger a bug in the lpfc driver that nested two levels of spin locks in reverse order. The reverse order of spin locks led to a deadlock situation and the system became unresponsive. With this update, a patch addressing the deadlock problem has been applied and the system no longer hangs in this situation. BZ# 922999 An error in backporting the block reservation feature from upstream resulted in a missing allocation of a reservation structure when an allocation is required during the rename system call. Renaming a file system object (for example, file or directory) requires a block allocation for the destination directory. If the destination directory had not had a reservation structure allocated, a NULL pointer dereference occurred, leading to a kernel panic. With this update, a reservation structure is allocated before the rename operation, and a kernel panic no longer occurs in this scenario. BZ# 805407 A system could become unresponsive due to an attempt to shut down an XFS file system that was waiting for log I/O completion. A patch to the XFS code has been applied that allows for the shutdown method to be called from different contexts so XFS log items can be deleted properly even outside the AIL, which fixes this problem. BZ# 922931 A bug in the dm_btree_remove() function could cause leaf values to have incorrect reference counts. Removal of a shared block could result in space maps considering the block as no longer used. As a consequence, sending a discard request to a shared region of a thin device could corrupt its snapshot. The bug has been fixed to prevent corruption in this scenario. BZ# 980273 A recent change in the memory mapping code introduced a new optional next-fit algorithm for allocating VMAs to map processed files to the address space. This change, however, broke the behavior of a certain internal function which then always followed the next-fit VMA allocation scheme instead of the first-fit VMA allocation scheme. Consequently, when the first-fit VMA allocation scheme was in use, this bug caused linear address space fragmentation and could lead to early "-ENOMEM" failures for mmap() requests. This patch restores the original first-fit behavior to the function so the aforementioned problems no longer occur. BZ# 922779 The GFS2 discard code did not calculate the sector offset correctly for block devices with the sector size of 4 KB, which led to loss of data and metadata on these devices. A patch correcting this problem has been applied so the discard and FITRIM requests now work as expected for the block devices with the 4 KB sector size. BZ# 1002765 A bug in the real-time (RT) scheduler could cause an RT priority process to stop running due to an invalid attribute of the run queue.
When a CPU became affected by this bug, the migration kernel thread stopped running on the CPU, and subsequently every other process that was migrated to the affected CPU by the system stopped running as well. A patch has been applied to the RT scheduler and RT priority processes are no longer affected by this problem. BZ# 920794 When using the congestion window lock functionality of the ip utility, the system could become unresponsive. This happened because the tcp_slow_start() function could enter an infinite loop if the congestion window was locked using route metrics. A set of patches has been applied to comply with the upstream kernel, ensuring the problem no longer occurs in this scenario. BZ# 978609 A race condition in the abort task and SPP device task management path of the isci driver could, under certain circumstances, cause the driver to fail to clean up timed-out I/O requests that were pending on an SAS disk device. As a consequence, the kernel removed such a device from the system. A patch applied to the isci driver fixes this problem by sending the task management function request to the SAS drive anytime the abort function is entered and the task has not completed. The driver now cleans up timed-out I/O requests as expected in this situation. BZ# 920672 Due to a race condition in the kernel's DMA initialization code, DMA requests from the hpsa and hpilo drivers could fail with IO_PAGE_FAULT errors during initialization of the AMD iommu driver on AMD systems with the IOMMU feature enabled. To avoid triggering this race condition, the kernel now executes the init_device_table_dma() function to block DMA requests from all devices only after the initialization of unity mappings is finished. BZ# 1003697 If the arp_interval and arp_validate bonding options were not enabled on the configured bond device in the correct order, the bond device did not process ARP replies, which led to link failures and changes of the active slave device. A series of patches has been applied to modify an internal bond ARP hook based on the values of arp_validate and arp_interval. Therefore, the ARP hook is registered even if arp_interval is set after arp_validate has already been enabled, and ARP replies are processed as expected. BZ# 920445 The kernel could rarely terminate instead of creating a dump file when a multi-threaded process using the FPU aborted. This happened because the kernel did not wait until all threads became inactive and attempted to dump the FPU state of active threads into memory, which triggered a BUG_ON() routine. A patch addressing this problem has been applied and the kernel now waits for the threads to become inactive before dumping their FPU state into memory. BZ# 962460 Previously, the Generic Receive Offload (GRO) functionality was not enabled by default for VLAN devices. Consequently, certain network adapters, such as Emulex Virtual Fabric Adapter (VFA) II, that use the be2net driver, were dropping packets when VLAN tagging was enabled and the 8021q kernel module was loaded. This update applies a patch that enables GRO by default for VLAN devices. BZ# 827548 A race condition between the read_swap_cache_async() and get_swap_page() functions in the Memory management (mm) code could lead to a deadlock situation. The deadlock could occur only on systems that deployed swap partitions on devices supporting block DISCARD and TRIM operations if kernel preemption was disabled (the !CONFIG_PREEMPT parameter).
If the read_swap_cache_async() function was given a SWAP_HAS_CACHE entry that did not have a page in the swap cache yet, a DISCARD operation was performed in the scan_swap_map() function. Consequently, completion of an I/O operation was scheduled on the same CPU's working queue the read_swap_cache_async() was running on. This caused the thread in read_swap_cache_async() to loop indefinitely around its "-EEXIST" case, rendering the system unresponsive. The problem has been fixed by adding an explicit cond_resched() call to read_swap_cache_async(), which allows other tasks to run on the affected CPU, and thus avoiding the deadlock. BZ# 987426 An infinite loop bug in the NFSv4 code caused an NFSv4 mount process to hang on a busy loop of the LOOKUP_ROOT operation when attempting to mount an NFSv4 file system and the first iteration on this operation failed. A patch has been applied that allows the LOOKUP_ROOT operation to exit properly, and a mount attempt now either succeeds or fails in this situation. BZ# 828936 A bug in the OProfile tool led to a NULL pointer dereference while unloading the OProfile kernel module, which resulted in a kernel panic. The problem was triggered if the kernel was running with the nolapic parameter set and OProfile was configured to use the NMI timer interrupt. The problem has been fixed by correctly setting the NMI timer when initializing OProfile. BZ# 976915 An NFS client previously did not wait for unfinished I/O operations to complete before sending the LOCKU and RELEASE_LOCKOWNER operations to the NFS server in order to release byte range locks on files. Consequently, if the server processed the LOCKU and RELEASE_LOCKOWNER operations before some of the related READ operations, it released all locking states associated with the requested lock owner, and the READs returned the NFS4ERR_BAD_STATEID error code. This resulted in the "Lock reclaim failed!" error messages being generated in the system log and the NFS client had to recover from the error. A series of patches has been applied ensuring that an NFS client waits for all outstanding I/O operations to complete before releasing the locks. BZ# 918239 When the Red Hat Enterprise Linux 6 kernel runs as a virtual machine, it performs boot-time detection of the hypervisor in order to enable hypervisor-specific optimizations. Red Hat Enterprise Linux 6.4 introduces detection and optimization for the Microsoft Hyper-V hypervisor. Previously, Hyper-V was detected first; however, because some Xen hypervisors can attempt to emulate Hyper-V, this could lead to a boot failure when that emulation was not exact. A patch has been applied to ensure that the attempt to detect Xen is always done before Hyper-V, resolving this issue. BZ# 962976 If the audit queue is too long, the kernel schedules the kauditd daemon to alleviate the load on the audit queue. Previously, if the current audit process had any pending signals in such a situation, it entered a busy-wait loop for the duration of an audit backlog timeout because the wait_for_auditd() function was called as an interruptible task. This could lead to a system lockup on non-preemptive uniprocessor systems. This update fixes the problem by setting wait_for_auditd() as uninterruptible. BZ# 833299 Due to a bug in firmware, systems using the LSI MegaRAID controller failed to initialize this device in the kdump kernel if the "intel_iommu=on" and "iommu=pt" kernel parameters were specified in the first kernel.
As a workaround until a firmware fix is available, a patch to the megaraid_sas driver has been applied so that if the firmware is not in the ready state upon the first attempt to initialize the controller, the driver resets the controller and retries for firmware transition to the ready state. BZ# 917872 A change in the port auto-selection code allowed ports to be shared without conflicts, which extended their usage. Consequently, when binding a socket with the SO_REUSEADDR socket option enabled, the bind(2) function could allocate an ephemeral port that was already used. A subsequent connection attempt failed in such a case with the EADDRNOTAVAIL error code. This update applies a patch that modifies the port auto-selection code so that bind(2) now selects a non-conflicting port even with the SO_REUSEADDR option enabled. BZ# 994430 A patch to the bridge multicast code introduced a bug allowing reinitialization of an active timer for a multicast group whenever an IPv6 multicast query was received. A patch has been applied to the bridge multicast code so that a bridge multicast timer is no longer reinitialized when it is active. BZ# 916994 A kernel panic could occur during path failover on systems using multiple iSCSI, FC or SRP paths to connect an iSCSI initiator and an iSCSI target. This happened because a race condition in the SCSI driver allowed removing a SCSI device from the system before processing its run queue, which led to a NULL pointer dereference. The SCSI driver has been modified and the race is now avoided by holding a reference to a SCSI device run queue while it is active. BZ# 994382 The kernel's md driver contained multiple bugs, including a use-after-free bug in the raid10 code that could cause a kernel panic. Also, a data corruption bug in the raid5 code was discovered. The bug occurred when a hard drive was replaced while a RAID4, RAID5, or RAID6 array containing the drive was in the process of recovery. A series of patches has been applied to fix all bugs that have been discovered. The md driver now contains the necessary tests that prevent the mentioned use-after-free and data corruption bugs from occurring. BZ# 840860 The sunrpc code paths that wake up an RPC task are highly optimized for speed so the code avoids using any locking mechanism but requires precise operation ordering. Multiple bugs were found related to operation ordering, which resulted in a kernel crash involving either a BUG_ON() assertion or an incorrect use of a data structure in the sunrpc layer. These problems have been fixed by properly ordering operations related to the RPC_TASK_QUEUED and RPC_TASK_RUNNING bits in the wake-up code paths of the sunrpc layer. BZ# 916735 In the RPC code, when a network socket backed up due to high network traffic, a timer was set causing a retransmission, which in turn could cause an even larger amount of network traffic to be generated. To prevent this problem, the RPC code now waits for the socket to empty instead of setting the timer. BZ# 916726 When using parallel NFS (pNFS), a kernel panic could occur when a process was killed while getting the file layout information during the open() system call. A patch has been applied to prevent this problem from occurring in this scenario. BZ# 916722 Previously, when open(2) system calls were processed, the GETATTR routine did not check to see if valid attributes were also returned. As a result, the open() call succeeded with invalid attributes instead of failing in such a case.
This update adds the missing check, and the open() call succeeds only when valid attributes are returned. BZ# 916361 The crypto_larval_lookup() function could return a larval, an in-between state when a cryptographic algorithm is being registered, even if it did not create one. This could cause a larval to be terminated twice, and result in a kernel panic. This occurred, for example, when the NFS service was run in FIPS mode, and attempted to use the MD5 hashing algorithm even though FIPS mode has this algorithm blacklisted. A condition has been added to the crypto_larval_lookup() function to check whether a larval was created before returning it. BZ# 976879 Previously, systems running heavily-loaded NFS servers could experience poor performance of the NFS READDIR operations on large directories that were undergoing concurrent modifications, especially over higher latency connections. This happened because the NFS code performed certain dentry operations inefficiently and revalidated directory attributes too often. This update applies a series of patches that address the problem as follows: needed dentries can be accessed from dcache after the READDIR operation, and directory attributes are revalidated only at the beginning of the directory or if the cached attributes expire. BZ# 976823 The GFS2 file system did not reserve journal space for a quota change block while growing the size of a file. Consequently, a fatal assertion causing a withdraw of the GFS2 file system could have been triggered when the free blocks were allocated from the secondary bitmap. With this update, GFS2 reserves additional blocks in the journal for the quota change so the file growing transaction can now complete successfully in this situation. BZ# 976535 A patch to the CIFS code caused a regression of a problem where, under certain conditions, a mount attempt of a CIFS DFS share failed with a "mount error(6): No such device or address" error message. This happened because the return code variable was not properly reset after an unsuccessful mount attempt. A backported patch has been applied to properly reset the variable and CIFS DFS shares can now be mounted as expected. BZ# 965002 A bug in the PCI driver allowed the use of a pointer to the Virtual Function (VF) device entry that had already been freed. Consequently, when hot-removing an I/O unit with enabled SR-IOV devices, a kernel panic occurred. This update modifies the PCI driver so a valid pointer to the Physical Function (PF) device entry is used and the kernel no longer panics in this situation. BZ# 915834 A race condition could occur in the uhci-hcd kernel module if the IRQ line was shared with other devices. The race condition allowed the IRQ handler routine to be called before the data structures were fully initialized, which caused the system to become unresponsive. This update applies a patch that fixes the problem by adding a test condition to the IRQ handler routine; if the data structure initialization is still in progress, the handler routine finishes immediately. BZ# 975507 An insufficiently designed calculation in the CPU accelerator could cause an arithmetic overflow in the set_cyc2ns_scale() function if the system uptime exceeded 208 days prior to using kexec to boot into a new kernel. This overflow led to a kernel panic on the systems using the Time Stamp Counter (TSC) clock source, primarily the systems using Intel Xeon E5 processors that do not reset TSC on soft power cycles.
A patch has been applied to modify the calculation so that this arithmetic overflow and kernel panic can no longer occur under these circumstances. BZ# 915479 Due to a bug in the NFSv4 nfsd code, a NULL pointer could have been dereferenced when nfsd was looking up a path to the NFSv4 recovery directory for the fsync operation, which resulted in a kernel panic. This update applies a patch that modifies the NFSv4 nfsd code to open a file descriptor for fsync in the NFSv4 recovery directory instead of looking up the path. The kernel no longer panics in this situation. BZ# 858198 Previously, bond and bridge devices did not pass Generic Receive Offload (GRO) information to their slave devices, and bridge devices also did not propagate VLAN information to their ports. As a consequence, in environments with VLAN configured over a bridge or bonding device, performance of the slave devices configured on the bridge and bonding devices was significantly degraded. A series of patches has been applied that adds the GRO feature for bonding and bridge devices and allows VLANs to be registered with the participating bridge ports. If a slave device supports GRO, its performance is now significantly increased in environments with VLAN configured over a bridge or bonding device. BZ# 975211 Due to a bug in the NFS code, kernel size-192 and size-256 slab caches could leak memory. This could eventually result in an OOM issue when most of the available memory was used by the respective slab cache. A patch has been applied to fix this problem and the respective attributes in the NFS code are now freed properly. BZ# 913704 Previously, the NFS Lock Manager (NLM) did not resend blocking lock requests after NFSv3 server reboot recovery. As a consequence, when an application was running on an NFSv3 mount and requested a blocking lock, the application received an -ENOLCK error. This patch ensures that NLM always resends blocking lock requests after the grace period has expired. BZ# 862758 When counting CPU time, the utime and stime values are scaled based on rtime. Prior to this update, the utime value was multiplied by the rtime value, but an integer multiplication overflow could happen, and the resulting value could then be truncated to 64 bits. As a consequence, utime values visible in the user space were stalled even if an application consumed a lot of CPU time. With this update, the multiplication is performed on stime instead of utime. This significantly reduces the chances of an overflow on most workloads because the stime value, unlike the utime value, cannot grow fast. BZ# 913660 In the case of a broken or malicious server, an index node (inode) of an incorrect type could be matched. This led to an NFS client NULL pointer dereference, and, consequently, to a kernel oops. To prevent this problem from occurring in this scenario, a check has been added to verify that the inode type is correct. BZ# 913645 A previously-applied patch introduced a bug in the ipoib_cm_destroy_tx() function, which allowed a CM object to be moved between lists without proper locking. Under a heavy system load, this could cause the system to crash. With this update, proper locking of the CM objects has been re-introduced to fix the race condition, and the system no longer crashes under a heavy load. BZ# 966853 Previously, when booting a Red Hat Enterprise Linux 6.4 system and the ACPI Static Resource Affinity Table (SRAT) had a hot-pluggable bit enabled, the kernel considered the SRAT table incorrect and NUMA was not configured.
This led to a general protection fault and a kernel panic occurring on the system. The problem has been fixed by using an SMBIOS check in the code in order to avoid the SRAT table consistency checks. NUMA is now configured as expected and the kernel no longer panics in this situation. BZ# 912963 When booting the normal kernel on certain servers, such as HP ProLiant DL980 G7, some interrupts may have been lost, which resulted in the system being unresponsive or, rarely, even in data loss. This happened because the kernel did not set the correct destination mode during boot; the kernel booted in "logical cluster mode", which is the default, while these systems supported only "x2apic physical mode". This update applies a series of patches addressing the problem. The underlying APIC code has been modified so the x2apic probing code now checks the Fixed ACPI Description Table (FADT) and installs the x2apic "physical" driver as expected. Also, the APIC code has been simplified and the code now uses probe routines to select the destination APIC mode and install the correct APIC drivers. BZ# 912867 Previously, the fsync(2) system call incorrectly returned the EIO (Input/Output) error instead of the ENOSPC (No space left on device) error. This was due to incorrect error handling in the page cache. This problem has been fixed and the correct error value is now returned. BZ# 912842 Previously, an NFS RPC task could enter a deadlock and become unresponsive if it was waiting for an NFSv4 state serialization lock to become available and the session slot was held by the NFSv4 server. This update fixes this problem along with the possible race condition in the pNFS return-on-close code. The NFSv4 client has also been modified to not accept delegated OPEN operations if a delegation recall is in effect. The client now also reports NFSv4 servers that try to return a delegation when the client is using the CLAIM_DELEGATE_CUR open mode. BZ# 912662 Due to the way the CPU time was calculated, an integer multiplication overflow bug could occur after several days of running CPU bound processes that were using hundreds of kernel threads. As a consequence, the kernel stopped updating the CPU time and provided an incorrect CPU time instead. This could confuse users and lead to various application problems. This update applies a patch fixing this problem by decreasing the precision of calculations when the stime and rtime values become too large. Also, a bug allowing stime values to be sometimes erroneously calculated as utime values has been fixed. BZ# 967095 An NFS server could terminate unexpectedly due to a NULL pointer dereference caused by a rare race condition in the lockd daemon. An applied patch fixes this problem by protecting the relevant code with spin locks, and thus avoiding the race in lockd. BZ# 911359 Virtual LAN (VLAN) support of the eHEA ethernet adapter did not work as expected. A "device ethX has buggy VLAN hw accel" message could have been reported when running the "dmesg" command. This was because an upstream backport patch removed the vlan_rx_register() function. This update adds the function back, and eHEA VLAN support works as expected. This update also addresses a possible kernel panic, which could occur due to a NULL pointer dereference when processing received VLAN packets. The patch adds a test condition verifying whether a VLAN group is set by the network stack, which prevents a possible NULL pointer from being dereferenced, and the kernel no longer crashes in this situation.
BZ# 910597 The kernel's implementation of RTAS (RunTime Abstraction Services) previously allowed the stop_topology_update() function to be called from an interrupt context during live partition migration on PowerPC and IBM System p machines. As a consequence, the system became unresponsive. This update fixes the problem by calling stop_topology_update() earlier in the migration process, and the system no longer hangs in this situation. BZ# 875753 Truncating files on a GFS2 file system could fail with an "unable to handle kernel NULL pointer dereference" error. This was because of a missing reservation structure that caused the truncate code to reference an incorrect pointer. To prevent this, a patch has been applied to allocate a block reservation structure before truncating a file. BZ# 909464 Previously, race conditions could sometimes occur in interrupt handling on the Emulex BladeEngine 2 (BE2) controllers, causing the network adapter to become unresponsive. This update provides a series of patches for the be2net driver, which prevents the race from occurring. The network cards using BE2 chipsets no longer hang due to incorrectly handled interrupt events. BZ# 908990 Previously, power-limit notification interrupts were enabled by default on the system. This could lead to degradation of system performance or even render the system unusable on certain platforms, such as Dell PowerEdge servers. A patch has been applied to disable power-limit notification interrupts by default and a new kernel command line parameter "int_pln_enable" has been added to allow users to observe these events using the existing system counters. Power-limit notification messages are also no longer displayed on the console. The affected platforms no longer suffer from degraded system performance due to this problem. BZ# 876778 A change in the ipmi_si driver handling caused an excessively long delay while booting Red Hat Enterprise Linux 6.4 on SGI UV platforms. The driver was loaded as a kernel module on previous versions of Red Hat Enterprise Linux 6, while it is now built into the kernel. However, SGI UV does not use, and thus does not support, the ipmi_si driver. A patch has been applied and the kernel now does not initialize the ipmi_si driver when booting on SGI UV. BZ# 908851 Previously, the queue limits were not being retained as they should have been if a device did not contain any data or if a multipath device temporarily lost all its paths. This problem has been fixed by avoiding a call to the dm_calculate_queue_limits() function. BZ# 908751 When adding a virtual PCI device, such as virtio disk, virtio net, e1000 or rtl8139, to a KVM guest, the kacpid thread reprograms the hot plug parameters of all devices on the PCI bus to which the new device is being added. When reprogramming the hot plug parameters of a VGA or QXL graphics device, the graphics device emulation requests flushing of the guest's shadow page tables. Previously, if the guest had a huge and complex set of shadow page tables, the flushing operation took a significant amount of time and the guest could appear to be unresponsive for several minutes. This resulted in exceeding the threshold of the "soft lockup" watchdog, and "BUG: soft lockup" events were logged by both the guest and host kernels. This update applies a series of patches that deal with this problem. KVM's Memory Management Unit (MMU) now avoids creating multiple page table roots in connection with processors that support Extended Page Tables (EPT).
This prevents the guest's shadow page tables from becoming too complex on machines with EPT support. MMU now also flushes only large memory mappings, which alleviates the situation on machines where the processor does not support EPT. Additionally, a free memory accounting race that could prevent KVM MMU from freeing memory pages has been fixed. BZ# 908608 Certain CPUs contain on-chip virtual-machine control structure (VMCS) caches that are used to keep active VMCSs managed by the KVM module. These VMCSs contain runtime information of the guest machines operated by KVM. These CPUs require support of the VMCLEAR instruction that allows flushing the cache's content into memory. The kernel previously did not use the VMCLEAR instruction in Kdump. As a consequence, when dumping a core of the QEMU KVM host, the respective CPUs did not flush VMCSs to the memory and the guests' runtime information was not included in the core dump. This problem has been addressed by a series of patches that implement support of using the VMCLEAR instruction in Kdump. The kernel now performs the VMCLEAR operation in Kdump if it is required by a CPU, so the vmcore file of the QEMU KVM host contains all VMCS information as expected. BZ# 908524 When the pNFS (parallel NFS) code was in use, a file locking process could enter a deadlock while trying to recover from a server reboot. This update introduces a new locking mechanism that avoids the deadlock situation in this scenario. BZ# 878708 Sometimes, the irqbalance tool could not get the CPU NUMA node information because of missing symlinks for CPU devices in sysfs. This update adds the NUMA node symlinks for CPU devices in sysfs, which is also useful when using irqbalance to build a CPU topology. BZ# 908158 The virtual file system (VFS) code had a race condition between the unlink and link system calls that allowed creating hard links to deleted (unlinked) files. This could, under certain circumstances, cause inode corruption that eventually resulted in a file system shutdown. The problem was observed in Red Hat Storage during rsync operations on replicated Gluster volumes that resulted in an XFS shutdown. A testing condition has been added to the VFS code, preventing hard links to deleted files from being created. BZ# 908093 When an inconsistency is detected in a GFS2 file system after an I/O operation, the kernel performs the withdraw operation on the local node. However, the kernel previously did not wait for an acknowledgement from the GFS control daemon (gfs_controld) before proceeding with the withdraw operation. Therefore, if a failure isolating the GFS2 file system from the data storage occurred, the kernel was not aware of this problem and an I/O operation to the shared block device may have been performed after the withdraw operation was logged as successful. This could lead to corruption of the file system or prevent the node from recovering its journal. This patch modifies the GFS2 code so the withdraw operation no longer proceeds without the acknowledgement from gfs_controld, and the GFS2 file system can no longer become corrupted after performing the withdraw operation. BZ# 907844 If a logical volume was created on devices with thin provisioning enabled, the mkfs.ext4 command took a long time to complete, and the following message was recorded in the system log: This was caused by discard request merging that was not completely functional in the block and SCSI layers. This functionality has been temporarily disabled to prevent such problems from occurring.
BZ# 907512 A patch that modified dcache and autofs code caused a regression. Due to this regression, unmounting a large number of expired automounts on a system under heavy NFS load caused soft lockups, rendering the system unresponsive. If a "soft lockup" watchdog was configured, the machine rebooted. To fix the regression, the erroneous patch has been reverted and the system now handles the aforementioned scenario properly without any soft lockups. BZ# 907227 Previously, when using parallel network file system (pNFS) and data was written to the appropriate storage device, the LAYOUTCOMMIT requests being sent to the metadata server could fail internally. The metadata server was not provided with the modified layout based on the written data, and these changes were not visible to the NFS client. This happened because the encoding functions for the LAYOUTCOMMIT and LAYOUTRETURN operations were defined as void, and thus returned an arbitrary status. This update corrects these encoding functions to return 0 on success as expected. The changes on the storage device are now propagated to the metadata server and can be observed as expected. BZ# 883905 When the Active Item List (AIL) becomes empty, the xfsaild daemon is moved to a task sleep state that depends on the timeout value returned by the xfsaild_push() function. The latest changes modified xfsaild_push() to return a 10-ms value when the AIL is empty, which set xfsaild into the uninterruptible sleep state (D state) and artificially increased the system load average. This update applies a patch that fixes this problem by setting the timeout value to the allowed maximum, 50 ms. This moves xfsaild to the interruptible sleep state (S state), avoiding the impact on load average. BZ# 905126 Previously, init scripts were unable to set the master interface MAC address properly because it was overwritten by the first slave MAC address. To avoid this problem, this update re-introduces the check for an unassigned MAC address before adopting the first slave's MAC address as its own. BZ# 884442 Due to a bug in the be2net driver, events in the RX, TX, and MCC queues were not acknowledged before closing the respective queue. This could cause unpredictable behavior when creating RX rings during the subsequent queue opening. This update applies a patch that corrects this problem, and events are now acknowledged as expected in this scenario. BZ# 904726 Previously, the mlx4 driver set the number of requested MSI-X vectors to 2 under multi-function mode on mlx4 cards. However, the default setting of the mlx4 firmware allows for a higher number of requested MSI-X vectors (4 of them with the current firmware). This update modifies the mlx4 driver so that it uses these default firmware settings, which improves the performance of mlx4 cards. BZ# 904025 Reading a large number of files from a pNFS (parallel NFS) mount and canceling the running operation by pressing Ctrl+C caused a general protection fault in the XDR code, which could manifest itself as a kernel oops with an "unable to handle kernel paging request" message. This happened because decoding of the LAYOUTGET operation is done by a worker thread and the caller waits for the worker thread to complete. When the reading operation was canceled, the caller stopped waiting and freed the pages. So the pages no longer existed at the time the worker thread called the relevant function in the XDR code.
The cleanup process of these pages has been moved to a different place in the code, which prevents the kernel oops from happening in this scenario. BZ# 903644 A patch to the mlx4 driver enabled an internal loopback to allow communication between functions on the same host. However, this change introduced a regression that caused virtual switch (vSwitch) bridge devices using a Mellanox Ethernet adapter as the uplink to become inoperative in native (non-SRIOV) mode under certain circumstances. To fix this problem, the destination MAC address is written to Tx descriptors of transmitted packets only in SRIOV or eSwitch mode, or during the device self-test. Uplink traffic works as expected in the described setup. BZ# 887006 The Intel 5520 and 5500 chipsets do not properly handle remapping of MSI and MSI-X interrupts. If the interrupt remapping feature is enabled on a system with such a chipset, various problems and service disruption could occur (for example, a NIC could stop receiving frames), and the "kernel: do_IRQ: 7.71 No irq handler for vector (irq -1)" error message appears in the system logs. As a workaround to this problem, it has been recommended to disable the interrupt remapping feature in the BIOS on such systems, and many vendors have updated their BIOS to disable interrupt remapping by default. However, the problem was still being reported by users whose BIOS level did not have this feature properly turned off. Therefore, this update modifies the kernel to check if the interrupt remapping feature is enabled on these systems and to provide users with a warning message advising them to turn off the feature and update the BIOS. BZ# 887045 When booting a Red Hat Enterprise Linux 6 system that utilized a large number of CPUs (more than 512), the system could fail to boot or could appear to be unresponsive after initialization. This happened because the CPU frequency driver used a regular spin lock (cpufreq_driver_lock) to serialize frequency transitions, and this lock could, under certain circumstances, become a source of heavy contention during the system initialization and operation. A patch has been applied to convert cpufreq_driver_lock into a read-write lock, which resolves the contention problem. All Red Hat Enterprise Linux 6 systems now boot and operate as expected. BZ# 903220 A patch to the kernel introduced a bug by assigning a different value to the IFLA_EXT_MASK Netlink attribute than found in the upstream kernels. This could have caused various problems; for example, a binary compiled against upstream headers could have failed or behaved unexpectedly on Red Hat Enterprise Linux 6.4 and later kernels. This update realigns IFLA_EXT_MASK in the enumeration correctly by synchronizing the IFLA_* enumeration with the upstream. This ensures that binaries compiled against Red Hat Enterprise Linux 6.4 kernel headers will function as expected. Backwards compatibility is guaranteed. BZ# 887868 Due to a bug in the SCTP code, a NULL pointer dereference could occur when freeing an SCTP association that was hashed, resulting in a kernel panic. A patch addresses this problem by trying to unhash SCTP associations before freeing them and the problem no longer occurs. BZ# 888417 Previously, a kernel panic could occur on machines using the SCSI sd driver with Data Integrity Field (DIF) type 2 protection. This was because the scsi_register_driver() function registered the prep_fn() function that might have needed to use the sd_cdp_pool variable for the DIF functionality.
However, the variable had not yet been initialized at this point. The underlying code has been updated so that the driver is registered last, which prevents a kernel panic from occurring in this scenario. BZ# 901747 The bnx2x driver could have previously reported an occasional MDC/MDIO timeout error along with the loss of the link connection. This could happen in environments using an older boot code because the MDIO clock was set in the beginning of each boot code sequence instead of per CL45 command. To avoid this problem, the bnx2x driver now sets the MDIO clock per CL45 command. Additionally, the MDIO clock is now implemented per EMAC register instead of per port number, which prevents ports from using different EMAC addresses for different PHY accesses. Also, a boot code or Management Firmware (MFW) upgrade is required to prevent the boot code (firmware) from taking over link ownership if the driver's pulse is delayed. The BCM57711 card requires boot code version 6.2.24 or later, and the BCM57712/578xx cards require MFW version 7.4.22 or later. BZ# 990806 When the Audit subsystem was under heavy load, it could loop infinitely in the audit_log_start() function instead of failing over to the error recovery code. This would cause soft lockups in the kernel. With this update, the timeout condition in the audit_log_start() function has been modified to properly fail over when necessary. BZ# 901701 A kernel update broke queue pair (qp) hash list deletion in the qp_remove() function. This could cause a general protection fault in the InfiniBand stack or QLogic InfiniBand driver. A patch has been applied to restore the former behavior so the general protection fault no longer occurs. BZ# 896233 Under rare circumstances, if a TCP retransmission was partially acknowledged and collapsed multiple times, the used socket buffer (SKB) could become corrupted due to an overflow caused by the transmission headroom. This resulted in a kernel panic. The problem was observed rarely when using an IP-over-InfiniBand (IPoIB) connection. This update applies a patch that verifies whether a transmission headroom exceeded the maximum size of the used SKB, and if so, the headroom is reallocated. It was also discovered that a TCP stack could retransmit misaligned SKBs if a malicious peer acknowledged a sub-MSS frame and the output interface did not have scatter-gather (SG) enabled. This update introduces a new function that allows for copying of an SKB with a new head so the SKB remains aligned in this situation. BZ# 896020 When using transparent proxy (TProxy) over IPv6, the kernel previously created neighbor entries for local interfaces and peers that were not reachable directly. This update corrects this problem and the kernel no longer creates invalid neighbor entries. BZ# 894683 A change in the port auto-selection code allowed sharing ports with no conflicts extending its usage. Consequently, when binding a socket with the SO_REUSEADDR socket option enabled, the bind(2) function could allocate an ephemeral port that was already used. A subsequent connection attempt failed in such a case with the EADDRNOTAVAIL error code. This update applies a patch that modifies the port auto-selection code so that bind(2) now selects a non-conflict port even with the SO_REUSEADDR option enabled. BZ# 893584 Timeouts could occur on an NFS client with heavy read workloads; for example, when using rsync and ldconfig. Both client-side and server-side causes were found for the problem.
On the client side, problems that could prevent the client from reconnecting lost TCP connections have been fixed. On the server side, TCP memory pressure on the server forced the send buffer size to be lower than the size required to send a single Remote Procedure Call (RPC), which consequently caused the server to be unable to reply to the client. Code fixes are still being considered. To work around the problem, increase the minimum TCP buffer sizes, for example using: BZ# 895336 Broadcom 5719 NICs could previously sometimes drop received jumbo frame packets due to cyclic redundancy check (CRC) errors. This update modifies the tg3 driver so that CRC errors no longer occur and Broadcom 5719 NICs process jumbo frame packets as expected. BZ# 896224 When running a high thread workload of small-sized files on an XFS file system, the system could sometimes become unresponsive or a kernel panic could occur. This occurred because the xfsaild daemon had a subtle code path that led to lock recursion on the xfsaild lock when a buffer in the AIL was already locked and an attempt was made to force the log to unlock it. This patch removes the dangerous code path and queues the log force to be invoked from a safe locking context with respect to xfsaild. This patch also fixes the race condition between buffer locking and buffer pinned state that exposed the original problem by rechecking the state of the buffer after a lock failure. The system no longer hangs and the kernel no longer panics in this scenario. BZ# 902965 The NFSv4.1 client could stop responding while recovering from a server reboot on an NFSv4.1 or pNFS mount with delegations disabled. This could happen due to insufficient locking in the NFS code and several related bugs in the NFS and RPC scheduler code which could trigger a deadlock situation. This update applies a series of patches which prevent possible deadlock situations from occurring. The NFSv4.1 client now recovers and continues with its workload as expected in the described situation. BZ# 1010840 The default sfc driver on Red Hat Enterprise Linux 6 allowed toggling the Large Receive Offload (LRO) flag on and off on a network device regardless of whether LRO was supported by the device or not. Therefore, when the LRO flag was enabled on devices without LRO support, the action had no effect and could confuse users. A patch to the sfc driver has been applied so that the sfc driver properly validates whether LRO is supported by the device. If the device does not support LRO, sfc disables the LRO flag so that users can no longer toggle it for that device. BZ# 886867 During device discovery, the system creates a temporary SCSI device with the LUN ID 0 if the LUN 0 is not mapped on the system. Previously, this led to a NULL pointer dereference because inquiry data was not allocated for the temporary LUN 0 device, which resulted in a kernel panic. This update adds a NULL pointer test in the underlying SCSI code, and the kernel no longer panics in this scenario. BZ# 886420 When a network interface card (NIC) is running in promiscuous (PROMISC) mode, the NIC may receive and process VLAN tagged frames even though no VLAN is attached to the NIC. However, some network drivers, such as bnx2, igb, tg3, and e1000e, did not handle processing of packets with VLAN tagged frames in PROMISC mode correctly if the frames had no VLAN group assigned.
The drivers processed the packets with incorrect routines and various problems could occur; for example, a DHCPv6 server connected to a VLAN could assign an IPv6 address from the VLAN pool to a NIC with no VLAN interface. To handle the VLAN tagged frames without a VLAN group properly, the frames have to be processed by the VLAN code, so the aforementioned drivers have been modified to refrain from performing a NULL value test of the packet's VLAN group field when the NIC is in PROMISC mode. This update also includes a patch fixing a bug where the bnx2x driver did not strip a VLAN header from the frame if no VLAN was configured on the NIC, and another patch that implements some register changes in order to enable receiving and transmitting of VLAN packets on a NIC even if no VLAN is registered with the card. BZ# 988460 When a slave device started up, the current_arp_slave parameter was unset but the active flags on the slave were not marked inactive. Consequently, more than one slave device with active flags in active-backup mode could be present on the system. A patch has been applied to fix this problem by marking the active flags inactive for a slave device before the current_arp_slave parameter is unset. BZ# 883575 Due to a bug in descriptor handling, the ioat driver did not correctly process pending descriptors on systems with the Intel Xeon Processor E5 family. Consequently, the CPU was utilized excessively on these systems. A patch has been applied to the ioat driver so the driver now determines pending descriptors correctly and CPU usage is normal again for the described processor family. BZ# 905561 A change in the bridge multicast code allowed sending general multicast queries in order to achieve faster convergence on startup. To prevent interference with multicast routers, the sent packets contained a zero source IP address. However, these packets interfered with certain multicast-aware switches, which resulted in the system being flooded with IGMP membership queries with a zero source IP address. A series of patches addresses this problem by disabling multicast queries by default and implementing a multicast querier option that allows sending of general multicast queries to be toggled on if needed. BZ# 882413 A bug was causing bad block detection to try to isolate which blocks were bad in a device that had suffered a complete failure - even when bad block tracking was not turned on. This was causing very large delays in returning I/O errors when the entire set of RAID devices was lost to failure. The large delays caused problems during disaster recovery scenarios. The bad block tracking code is now properly disabled and errors return in a timely fashion when enough devices fail in a RAID array to exceed its redundancy. BZ# 876600 Previously, running commands such as "ls", "find" or "move" on a MultiVersion File System (MVFS) could cause a kernel panic. This happened because the d_validate() function, which is used for dentry validation, called the kmem_ptr_validate() function to validate a pointer to a parent dentry. The pointer could have been freed at any time, so the kmem_ptr_validate() function could not guarantee that the pointer was safe to dereference, which could lead to a NULL pointer dereference. This update modifies d_validate() to verify the parent-child relationship by traversing the parent dentry's list of child dentries, which solves this problem. The kernel no longer panics in the described scenario.
BZ# 1008705 The sfc driver exposes on-board flash partitions using the MTD subsystem and it must expose up to 9 flash partitions per board. However, the MTD subsystem in Red Hat Enterprise Linux 6 has a static limit of 32 flash partitions. As a consequence, the Solarflare tools cannot operate on all boards if more than 3 boards are installed, preventing firmware on some boards from being updated or queried for a version number. With this update, a new EFX_MCDI_REQUEST sub-command has been added to the driver-private SIOCEFX ioctl, which allows bypassing the MTD layer and sending requests directly to the controller's firmware. The Solarflare tools can now be used and the firmware on all installed devices can be updated as expected in this scenario. BZ# 871795 Previously, the VLAN code incorrectly cleared the timestamping interrupt bit for network devices using the igb driver. Consequently, timestamping failed on the igb network devices with Precision Time Protocol (PTP) support. This update modifies the igb driver to preserve the interrupt bit if interrupts are disabled. BZ# 869736 When using more than 4 GB of RAM with an AMD processor, reserved regions and memory holes (E820 regions) can also be placed above the 4 GB range. For example, on configurations with more than 1 TB of RAM, AMD processors reserve the 1012 GB - 1024 GB range for the Hyper Transport (HT) feature. However, the Linux kernel did not correctly handle E820 regions that are located above the 4 GB range. Therefore, when installing Red Hat Enterprise Linux on a machine with an AMD processor and 1 TB of RAM, a kernel panic occurred and the installation failed. This update modifies the kernel to exclude E820 regions located above the 4 GB range from direct mapping. The kernel also no longer maps the whole memory on boot but only finds memory ranges that are necessary to be mapped. The system can now be successfully installed on the above-described configuration. BZ# 867689 The kernel interface to ACPI had implemented error messaging incorrectly. The following error message was displayed when the system had a valid ACPI Error Record Serialization Table (ERST) and the pstore.backend kernel parameter had been used to disable use of ERST by the pstore interface: However, the same message was also used to indicate errors precluding registration. A series of patches modifies the relevant ACPI code so that ACPI now properly distinguishes between the different cases and prints unique and informative messages accordingly. BZ# 965132 When setting up a bonding device, a certain flag was used to distinguish between TLB and ALB modes. However, usage of this flag in ALB mode allowed enslaving NICs before the bond was activated. This resulted in enslaved NICs not having unique MAC addresses as required, and consequent loss of "reply" packets sent to the slaves. This patch modifies the function responsible for the setup of the slave's MAC address so the flag is no longer needed to distinguish ALB mode from TLB mode, and the flag has been removed. The described problem no longer occurs in this situation. BZ# 920752 A bug in the do_filp_open() function caused it to exit early if any write access was requested on a read-only file system. This prevented the opening of device nodes on a read-only file system. With this update, the do_filp_open() function has been fixed to no longer exit if a write request is made on a read-only file system.
BZ# 981741 A dentry leak occurred in the FUSE code when, after a negative lookup, a negative dentry was neither dropped nor was the reference counter of the dentry decremented. This triggered a BUG() macro when unmounting a FUSE subtree containing the dentry, resulting in a kernel panic. A series of patches related to this problem has been applied to the FUSE code, and negative dentries are now properly dropped so that triggering the BUG() macro is now avoided. BZ# 924804 This update reverts two previously-included qla2xxx patches. These patches changed the fibre channel target port discovery procedure, which resulted in some ports not being discovered in some corner cases. Reverting these two patches fixes the discovery issues. BZ# 957821 Due to a bug in the memory mapping code, the fadvise64() system call sometimes did not flush all the relevant pages of the given file from cache memory. A patch addresses this problem by adding a test condition that verifies whether all the requested pages were flushed and retries with an attempt to empty the LRU pagevecs in the case of test failure. BZ# 957231 The xen-netback and xen-netfront drivers cannot handle packets with a size greater than 64 KB including headers. The xen-netfront driver previously did not account for any headers when determining the maximum size of GSO (Generic Segmentation Offload). Consequently, Xen DomU guest operations could have caused a network DoS issue on DomU when sending packets larger than 64 KB. This update adds a patch that corrects calculation of the GSO maximum size and the problem no longer occurs. BZ# 848085 A possible race in the tty layer could result in a kernel panic after triggering the BUG_ON() macro. As a workaround, the BUG_ON() macro has been replaced by the WARN_ON() macro, which allows for avoiding the kernel panic and investigating the race problem further. BZ# 980876 A bug in the network bridge code allowed an internal function to call code which was not atomic-safe while holding a spin lock. Consequently, a "BUG: scheduling while atomic" error was triggered and a call trace was logged by the kernel. This update applies a patch that orders the function properly so the function no longer holds a spin lock while calling code which is not atomic-safe. The aforementioned error with a call trace no longer occurs in this case. BZ# 916806 An NFSv4 client could previously enter a deadlock situation with the state recovery thread during state recovery after a reboot of an NFSv4 server. This happened because the client did not release the NFSv4 sequence ID of an OPEN operation that was requested before the reboot. This problem is resolved by releasing the sequence ID before the client starts waiting for the server to recover. BZ# 859562 A bug in the device-mapper RAID kernel module was preventing the "sync" directive from being honored. The result was that users were unable to force their RAID arrays to undergo a complete resync if desired. This has been fixed and users can use 'lvchange --resync my_vg/my_raid_lv' to force a complete resynchronization on their LVM RAID arrays. Enhancements BZ# 823012 This update provides simplified performance analysis for software on Linux on System z by using the Linux perf tool to access the hardware performance counters. BZ# 829506 The fnic driver previously allowed I/O requests with a number of SGL descriptors greater than is supported by Cisco UCS Palo adapters.
Consequently, the adapter returned any I/O request with more than 256 SGL descriptors with an error indicating invalid SGLs. A patch has been applied to limit the maximum number of supported SGLs in the fnic driver to 256 and the problem no longer occurs. BZ# 840454 To transmit data, for example, trace data, from guests to hosts, a low-overhead communication channel was required. Support for the splice() call has been added to the virtio_console module in the Linux kernel. This enables sending guest kernel data to the host without extra copies of the data being made inside the guest. Low-overhead communication between the guest Linux kernel and host userspace is performed via virtio-serial. BZ# 888903 A new MTIOCTOP operation, MTWEOFI, has been added to the SCSI tape driver, which allows writing of "filemarks" with the "immediate" bit. This allows a SCSI tape drive to preserve the content of its buffer, enabling the file operation to start immediately. This can significantly increase write performance for applications that have to write multiple small files to the tape, while it also reduces tape wear. BZ# 913650 Previously, a user needed to unmount their RAID LV, deactivate it, and re-activate it in order to restore a transiently failed device in their array. Now it is possible to restore such devices without unmounting by simply running 'lvchange --refresh'. BZ# 923212 Open vSwitch (OVS) is an open-source, multi-layer software switch designed to be used as a virtual switch in virtualized server environments. Starting with Red Hat Enterprise Linux 6.4, the Open vSwitch kernel module is included as an enabler for Red Hat Enterprise Linux OpenStack Platform. Open vSwitch is only supported in conjunction with Red Hat products containing the accompanying user-space packages. Without these packages, Open vSwitch will not function and cannot be used with other Red Hat Enterprise Linux variants. BZ# 928983 The RHEL 6.5 bfa driver changes the behavior of the dev_loss_tmo value such that it can only be set to a value greater than the bfa driver-specific path_tov value. The minimum default value that the dev_loss_tmo can be set to is 31 seconds. Attempting to set the dev_loss_tmo value lower than 31 seconds without lowering the default bfa path_tov value will not succeed. BZ# 929257 Error recovery support has been added to the flash device driver, which allows hardware service upgrades without negative impact on I/O of flash devices. BZ# 929259 The crypto adapter resiliency feature has been added. This feature provides System z-typical RAS for cryptographic adapters through comprehensive failure recovery. For example, this feature handles unexpected failures or changes caused by Linux guest relocation, suspend and resume activities, or configuration changes. BZ# 929262 The "fuzzy live dump" feature has been added. With this feature, kernel dumps from running Linux systems can be created to allow problem analysis without taking down systems. Because the Linux system continues running while the dump is written, and kernel data structures are changing during the dump process, the resulting dump contains inconsistencies. BZ# 929264 The kernel now provides an offline interface for DASD devices. Instead of setting a DASD device offline and returning all outstanding I/O requests as failed, with this interface you can set a DASD device offline and write all outstanding data to the device before setting the device offline.
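To give a sense of how this offline interface is typically driven from user space, the following minimal sketch assumes the interface is exposed as a safe_offline sysfs attribute of the CCW device; the attribute name and the device bus ID 0.0.4711 are assumptions for illustration, not details taken from the note above.

echo 1 > /sys/bus/ccw/devices/0.0.4711/safe_offline    # flush all outstanding data, then set the DASD offline

The chccwdev tool from s390utils offers a comparable front end for this kind of operation; consult the DASD device driver documentation shipped with your kernel for the exact interface.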
BZ# 929274 The kernel now provides the Physical Channel ID (PCHID) mapping that enables hardware detection with a machine-wide unique identifier. BZ# 929275 The kernel now provides VEPA mode support. VEPA mode routes traffic between virtual machines on the same mainframe through an external switch. The switch then becomes a single point of control for security, filtering, and management. BZ# 755486 Message Transfer Part Level 3 User Adaptation Layer (M3UA) is a protocol defined by the IETF standard for transporting MTP Level 3 user part signaling messages over IP using Stream Control Transmission Protocol (SCTP) instead of telephony equipment like ISDN and PSTN. With this update, M3UA measurement counters have been included for SCTP. BZ# 818344 Support for future Intel 2D and 3D graphics has been added to allow systems using future Intel processors to be certified through the Red Hat Hardware Certification program. BZ# 826061 In certain storage configurations (for example, configurations with many LUNs), the SCSI error handling code can spend a large amount of time issuing commands such as TEST UNIT READY to unresponsive storage devices. A new sysfs parameter, eh_timeout, has been added to the SCSI device object, which allows configuration of the timeout value for TEST UNIT READY and REQUEST SENSE commands used by the SCSI error handling code. This decreases the amount of time spent checking these unresponsive devices. The default value of eh_timeout is 10 seconds, which was the timeout value used prior to adding this functionality. BZ# 839470 With this update, 12 Gbps LSI SAS devices are now supported in Red Hat Enterprise Linux 6. BZ# 859446 Red Hat Enterprise Linux 6.5 introduces the Orlov block allocator that provides better locality for files which are truly related to each other and likely to be accessed together. In addition, when resource groups are highly contended, a different group is used to maximize performance. BZ# 869622 The mdadm tool now supports the TRIM commands for RAID0, RAID1, RAID10 and RAID5. BZ# 880142 Network namespace support for OpenStack has been added. Network namespaces (netns) is a lightweight container-based virtualization technology. A virtual network stack can be associated with a process group. Each namespace has its own loopback device and process space. Virtual or real devices can be added to each network namespace, and the user can assign IP addresses to these devices and use them as a network node. BZ# 908606 Support for dynamic hardware partitioning and system board slot recognition has been added. The dynamic hardware partitioning and system board slot recognition features alert high-level system middleware or applications for reconfiguration and allow users to grow the system to support additional workloads without a reboot. BZ# 914771 , BZ# 920155 , BZ# 914797 , BZ# 914829 , BZ# 914832 , BZ# 914835 An implementation of the Precision Time Protocol (PTP) according to IEEE standard 1588 for Linux was introduced as a Technology Preview in Red Hat Enterprise Linux 6.4. The PTP infrastructure, both kernel and user space, is now fully supported in Red Hat Enterprise Linux 6.5. Network driver time stamping support now also includes the following drivers: bnx2x, tg3, e1000e, igb, ixgbe, and sfc. BZ# 862340 The Solarflare driver (sfc) has been updated to add PTP support as a Technology Preview.
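As a rough illustration of how this PTP support is typically verified and used, the sketch below queries a driver's time stamping capabilities and starts a PTP client from the linuxptp user-space package; the interface name eth0 is an assumption, and the capability flags reported depend on the NIC and driver.

ethtool -T eth0        # list the hardware and software time stamping capabilities the driver advertises
ptp4l -i eth0 -m       # run an IEEE 1588 (PTP) port on that interface, printing log messages to stdout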
BZ# 918316 In Red Hat Enterprise Linux 6.5, users can change the cryptography hash function from MD5 to SHA1 for Stream Control Transmission Protocol (SCTP) connections. BZ# 922129 The pm8001/pm80xx driver adds support for PMC-Sierra Adaptec Series 6H and 7H SAS/SATA HBA cards as well as PMC Sierra 8081, 8088, and 8089 chip based SAS/SATA controllers. BZ# 922299 VMware Platform Drivers Updates The VMware network para-virtualized driver has been updated to the latest upstream version. BZ# 922941 The Error-correcting code (ECC) memory has been enabled for future generation of AMD processors. This feature provides the ability to check for performance and errors by accessing ECC memory related counters and status bits. BZ# 922965 Device support is enabled in the operating system for future Intel System-on-Chip (SOC) processors. These include Dual Atom processors, memory controller, SATA, Universal Asynchronous Receiver/Transmitter, System Management Bus (SMBUS), USB and Intel Legacy Block (ILB - lpc, timers, SMBUS (i2c_801 module)). BZ# 947944 Kernel Shared Memory (KSM) has been enhanced to consider non-uniform memory access (NUMA) when coalescing pages, which improves performance of the applications on the system. Also, additional page types have been included to increase the density of applications available for Red Hat OpenShift. BZ# 949805 FUSE (Filesystem in User Space) is a framework that allows for development of file systems purely in the user space without requiring modifications to the kernel. Red Hat Enterprise Linux 6.5 delivers performance enhancements for user space file systems that use FUSE, for example, GlusterFS (Red Hat Storage). BZ# 864597 The default TCP stack buffers are too large for high bandwidth applications that fully utilize the Ethernet link. This could result in a situation where connection bandwidth could not be fully utilized and could be distributed unequally if the link was shared by multiple client devices. To resolve this problem, a new feature, TCP Small Queues (TSQ), has been introduced to the TCP code. The TSQ feature reduces a number of TCP packets in xmit queues, TCP round-trip time (RTT), and the congestion window (CWND) size. It also mitigates an impact of a possible bufferbloat problem. This change also includes a patch that resolves a performance problem on mlx4 devices caused by setting the default value of the Tx coalescing too high. All Red Hat Enterprise Linux 6 users are advised to install these updated packages, which correct these issues, and fix the bugs and add the enhancements noted in the Red Hat Enterprise Linux 6.5 Release Notes and Technical Notes. The system must be rebooted for this update to take effect. | [
"kernel: blk: request botched",
"echo \"1048576 1048576 4194304\" >/proc/sys/net/ipv4/tcp_wmem",
"ERST: Could not register with persistent store"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/kernel |
Chapter 12. Identity (keystone) Parameters | Chapter 12. Identity (keystone) Parameters Parameter Description AdminEmail The email for the OpenStack Identity (keystone) admin account. The default value is [email protected] . AdminPassword The password for the OpenStack Identity (keystone) admin account. AdminToken The OpenStack Identity (keystone) secret and database password. KeystoneAuthMethods A list of methods used for authentication. KeystoneChangePasswordUponFirstUse Enabling this option requires users to change their password when the user is created, or upon administrative reset. KeystoneCorsAllowedOrigin Indicate whether this resource may be shared with the domain received in the request "origin" header. KeystoneCredential0 The first OpenStack Identity (keystone) credential key. Must be a valid key. KeystoneCredential1 The second OpenStack Identity (keystone) credential key. Must be a valid key. KeystoneDisableUserAccountDaysInactive The maximum number of days a user can go without authenticating before being considered "inactive" and automatically disabled (locked). KeystoneEnableMember Create the member role, useful for undercloud deployment. The default value is False . KeystoneFederationEnable Enable support for federated authentication. The default value is False . KeystoneFernetKeys Mapping containing OpenStack Identity (keystone) fernet keys and their paths. KeystoneFernetMaxActiveKeys The maximum active keys in the OpenStack Identity (keystone) fernet key repository. The default value is 5 . KeystoneLDAPBackendConfigs Hash containing the configurations for the LDAP backends configured in keystone. KeystoneLDAPDomainEnable Trigger to call ldap_backend puppet keystone define. The default value is False . KeystoneLockoutDuration The number of seconds a user account will be locked when the maximum number of failed authentication attempts (as specified by KeystoneLockoutFailureAttempts) is exceeded. KeystoneLockoutFailureAttempts The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by KeystoneLockoutDuration. KeystoneMinimumPasswordAge The number of days that a password must be used before the user can change it. This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. KeystoneNotificationFormat The OpenStack Identity (keystone) notification format. The default value is basic . KeystoneNotificationTopics OpenStack Identity (keystone) notification topics to enable. KeystoneOpenIdcClientId The client ID to use when handshaking with your OpenID Connect provider. KeystoneOpenIdcClientSecret The client secret to use when handshaking with your OpenID Connect provider. KeystoneOpenIdcCryptoPassphrase Passphrase to use when encrypting data for OpenID Connect handshake. The default value is openstack . KeystoneOpenIdcEnable Enable support for OpenIDC federation. The default value is False . KeystoneOpenIdcEnableOAuth Enable OAuth 2.0 integration. The default value is False . KeystoneOpenIdcIdpName The name associated with the IdP in OpenStack Identity (keystone). KeystoneOpenIdcIntrospectionEndpoint OAuth 2.0 introspection endpoint for mod_auth_openidc. KeystoneOpenIdcProviderMetadataUrl The url that points to your OpenID Connect provider metadata. KeystoneOpenIdcRemoteIdAttribute Attribute to be used to obtain the entity ID of the Identity Provider from the environment. The default value is HTTP_OIDC_ISS . 
KeystoneOpenIdcResponseType Response type to be expected from the OpenID Connect provider. The default value is id_token . KeystonePasswordExpiresDays The number of days for which a password will be considered valid before requiring it to be changed. KeystonePasswordRegex The regular expression used to validate password strength requirements. KeystonePasswordRegexDescription Describe your password regular expression here in language for humans. KeystoneSSLCertificate OpenStack Identity (keystone) certificate for verifying token validity. KeystoneSSLCertificateKey OpenStack Identity (keystone) key for signing tokens. KeystoneTokenProvider The OpenStack Identity (keystone) token format. The default value is fernet . KeystoneTrustedDashboards A list of dashboard URLs trusted for single sign-on. KeystoneUniqueLastPasswordCount This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. KeystoneWorkers Set the number of workers for the OpenStack Identity (keystone) service. Note that more workers creates a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. The default value is %{::os_workers_keystone} . ManageKeystoneFernetKeys Whether director should manage the OpenStack Identity (keystone) fernet keys or not. If set to True, the fernet keys will get the values from the saved keys repository in OpenStack Workflow (mistral) from the KeystoneFernetKeys variable. If set to false, only the stack creation initializes the keys, but subsequent updates will not touch them. The default value is True . NotificationDriver Driver or drivers to handle sending notifications. The default value is messagingv2 . TokenExpiration Set a token expiration time in seconds. The default value is 3600 . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/identity-keystone-parameters |
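As a hedged illustration of how parameters from the table above are usually consumed, the following sketch shows a custom environment file for the overcloud deployment; the file name, the selected parameters, and every value are hypothetical examples rather than recommended settings.

parameter_defaults:
  KeystoneLockoutFailureAttempts: 5         # lock an account after five failed authentication attempts
  KeystoneLockoutDuration: 1800             # keep the account locked for 30 minutes
  KeystonePasswordExpiresDays: 90           # require a password change every 90 days
  KeystoneChangePasswordUponFirstUse: true  # force a reset when the account is created or administratively reset

Such a file would typically be supplied with an additional -e keystone-security.yaml argument to the openstack overcloud deploy command; verify the parameter names against the release you are deploying before relying on them.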
Extension APIs | Extension APIs OpenShift Container Platform 4.17 Reference guide for extension APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/extension_apis/index |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/testing_guide_camel_k/pr01 |
DM Multipath | DM Multipath Red Hat Enterprise Linux 7 Configuring and managing Device Mapper Multipath Steven Levine Red Hat Customer Content Services [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/index |
11.2.3. Mail User Agent | 11.2.3. Mail User Agent A Mail User Agent ( MUA ) is synonymous with an email client application. An MUA is a program that, at the very least, allows a user to read and compose email messages. Many MUAs are capable of retrieving messages via the POP or IMAP protocols, setting up mailboxes to store messages, and sending outbound messages to an MTA. MUAs may be graphical, such as Mozilla Mail , or have a very simple, text-based interface, such as mutt . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-email-types-mua |
Chapter 6. Working with Helm charts | Chapter 6. Working with Helm charts 6.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. Creating a chart in a cluster creates a running instance of the chart known as a release . Each time a chart is created, or a release is upgraded or rolled back, an incremental revision is created. 6.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Container Platform or Kubernetes resources. Package and share your applications as charts. 6.1.2. Red Hat Certification of Helm charts for OpenShift You can choose to verify and certify your Helm charts by Red Hat for all the components you will be deploying on the Red Hat OpenShift Container Platform. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters. 6.1.3. Additional resources For more information on how to certify your Helm charts as a Red Hat partner, see Red Hat Certification of Helm charts for OpenShift . For more information on OpenShift and Container certification guides for Red Hat partners, see Partner Guide for OpenShift and Container Certification . For a list of the charts, see the Red Hat Helm index file . You can view the available charts at the Red Hat Marketplace . For more information, see Using the Red Hat Marketplace . 6.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 6.2.1. On Linux Download the Helm binary and add it to your path: Linux (x86_64, amd64) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm Linux on IBM Power(R) (ppc64le) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 6.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 6.2.3. 
On Windows 10 Download the latest .exe file and put in a directory of your preference. Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 6.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 6.3. Configuring custom Helm chart repositories You can create Helm releases on an OpenShift Container Platform cluster using the following methods: The CLI. The Developer perspective of the web console. The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file . As a cluster administrator, you can add multiple cluster-scoped and namespace-scoped Helm chart repositories, separate from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . As a regular user or project member with the appropriate role-based access control (RBAC) permissions, you can add multiple namespace-scoped Helm chart repositories, apart from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . In the Developer perspective of the web console, you can use the Helm page to: Create Helm Releases and Repositories using the Create button. Create, update, or delete a cluster-scoped or namespace-scoped Helm chart repository. View the list of the existing Helm chart repositories in the Repositories tab, which can also be easily distinguished as either cluster scoped or namespace scoped. 6.3.1. Installing a Helm chart on an OpenShift Container Platform cluster Prerequisites You have a running OpenShift Container Platform cluster and you have logged into it. You have installed Helm. Procedure Create a new project: USD oc new-project vault Add a repository of Helm charts to your local Helm client: USD helm repo add openshift-helm-charts https://charts.openshift.io/ Example output "openshift-helm-charts" has been added to your repositories Update the repository: USD helm repo update Install an example HashiCorp Vault: USD helm install example-vault openshift-helm-charts/hashicorp-vault Example output NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault! Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2 6.3.2. Creating Helm releases using the Developer perspective You can use either the Developer perspective in the web console or the CLI to select and create a release from the Helm charts listed in the Developer Catalog . 
You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console. Prerequisites You have logged in to the web console and have switched to the Developer perspective . Procedure To create Helm releases from the Helm charts provided in the Developer Catalog : In the Developer perspective, navigate to the +Add view and select a project. Then click Helm Chart option to see all the Helm Charts in the Developer Catalog . Select a chart and read the description, README, and other details about the chart. Click Create . Figure 6.1. Helm charts in developer catalog In the Create Helm Release page: Enter a unique name for the release in the Release Name field. Select the required chart version from the Chart Version drop-down list. Configure your Helm chart by using the Form View or the YAML View . Note Where available, you can switch between the YAML View and Form View . The data is persisted when switching between the views. Click Create to create a Helm release. The web console displays the new release in the Topology view. If a Helm chart has release notes, the web console displays them. If a Helm chart creates workloads, the web console displays them on the Topology or Helm release details page. The workloads are DaemonSet , CronJob , Pod , Deployment , and DeploymentConfig . View the newly created Helm release in the Helm Releases page. You can upgrade, rollback, or delete a Helm release by using the Actions button on the side panel or by right-clicking a Helm release. 6.3.3. Using Helm in the web terminal You can use Helm by Accessing the web terminal in the Developer perspective of the web console. 6.3.4. Creating a custom Helm chart on OpenShift Container Platform Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Container Platform objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. 5 The Version of your chart as per the Semantic Versioning (SemVer) 2.0.0 Specification. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 6.3.5. Adding custom Helm chart repositories As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster. 
Sample Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository, run: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed. For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 6.2. Chart repositories filter Note If a cluster administrator removes all of the chart repositories, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel. 6.3.6. Adding namespace-scoped custom Helm chart repositories The cluster-scoped HelmChartRepository custom resource definition (CRD) for Helm repository provides the ability for administrators to add Helm repositories as custom resources. The namespace-scoped ProjectHelmChartRepository CRD allows project members with the appropriate role-based access control (RBAC) permissions to create Helm repository resources of their choice but scoped to their namespace. Such project members can see charts from both cluster-scoped and namespace-scoped Helm repository resources. Note Administrators can limit users from creating namespace-scoped Helm repository resources. By limiting users, administrators have the flexibility to control the RBAC through a namespace role instead of a cluster role. This avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications. The addition of the namespace-scoped Helm repository does not impact the behavior of the existing cluster-scoped Helm repository. As a regular user or project member with the appropriate RBAC permissions, you can add custom namespace-scoped Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new namespace-scoped Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your namespace. Sample Namespace-scoped Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository scoped to your my-namespace namespace, run: USD cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF The output verifies that the namespace-scoped Helm Chart Repository CR is created: Example output Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed in your my-namespace namespace. 
For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 6.3. Chart repositories filter in your namespace Alternatively, run: USD oc get projecthelmchartrepositories --namespace my-namespace Example output Note If a cluster administrator or a regular user with appropriate RBAC permissions removes all of the chart repositories in a specific namespace, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel for that specific namespace. 6.3.7. Creating credentials and CA certificates to add Helm chart repositories Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to it. You can use the web console as well as the CLI to add credentials and certificates. Procedure To configure the credentials and certificates, and then add a Helm chart repository using the CLI: In the openshift-config namespace, create a ConfigMap object with a custom CA certificate in PEM encoded format, and store it under the ca-bundle.crt key within the config map: USD oc create configmap helm-ca-cert \ --from-file=ca-bundle.crt=/path/to/certs/ca.crt \ -n openshift-config In the openshift-config namespace, create a Secret object to add the client TLS configurations: USD oc create secret tls helm-tls-configs \ --cert=/path/to/certs/client.crt \ --key=/path/to/certs/client.key \ -n openshift-config Note that the client certificate and key must be in PEM encoded format and stored under the keys tls.crt and tls.key , respectively. Add the Helm repository as follows: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF The ConfigMap and Secret are consumed in the HelmChartRepository CR using the tlsConfig and ca fields. These certificates are used to connect to the Helm repository URL. By default, all authenticated users have access to all configured charts. However, for chart repositories where certificates are needed, you must provide users with read access to the helm-ca-cert config map and helm-tls-configs secret in the openshift-config namespace, as follows: USD cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["helm-ca-cert"] verbs: ["get"] - apiGroups: [""] resources: ["secrets"] resourceNames: ["helm-tls-configs"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF 6.3.8. Filtering Helm Charts by their certification level You can filter Helm charts based on their certification level in the Developer Catalog . Procedure In the Developer perspective, navigate to the +Add view and select a project. From the Developer Catalog tile, select the Helm Chart option to see all the Helm charts in the Developer Catalog . 
Use the filters to the left of the list of Helm charts to filter the required charts: Use the Chart Repositories filter to filter charts provided by Red Hat Certification Charts or OpenShift Helm Charts . Use the Source filter to filter charts sourced from Partners , Community , or Red Hat . Certified charts are indicated with the ( ) icon. Note The Source filter will not be visible when there is only one provider type. You can now select the required chart and install it. 6.3.9. Disabling Helm Chart repositories You can disable Helm Charts from a particular Helm Chart Repository in the catalog by setting the disabled property in the HelmChartRepository custom resource to true . Procedure To disable a Helm Chart repository by using CLI, add the disabled: true flag to the custom resource. For example, to remove an Azure sample chart repository, run: To disable a recently added Helm Chart repository by using Web Console: Go to Custom Resource Definitions and search for the HelmChartRepository custom resource. Go to Instances , find the repository you want to disable, and click its name. Go to the YAML tab, add the disabled: true flag in the spec section, and click Save . Example The repository is now disabled and will not appear in the catalog. 6.4. Working with Helm releases You can use the Developer perspective in the web console to update, rollback, or delete a Helm release. 6.4.1. Prerequisites You have logged in to the web console and have switched to the Developer perspective . 6.4.2. Upgrading a Helm release You can upgrade a Helm release to upgrade to a new chart version or update your release configuration. Procedure In the Topology view, select the Helm release to see the side panel. Click Actions Upgrade Helm Release . In the Upgrade Helm Release page, select the Chart Version you want to upgrade to, and then click Upgrade to create another Helm release. The Helm Releases page displays the two revisions. 6.4.3. Rolling back a Helm release If a release fails, you can rollback the Helm release to a version. Procedure To rollback a release using the Helm view: In the Developer perspective, navigate to the Helm view to see the Helm Releases in the namespace. Click the Options menu adjoining the listed release, and select Rollback . In the Rollback Helm Release page, select the Revision you want to rollback to and click Rollback . In the Helm Releases page, click on the chart to see the details and resources for that release. Go to the Revision History tab to see all the revisions for the chart. Figure 6.4. Helm revision history If required, you can further use the Options menu adjoining a particular revision and select the revision to rollback to. 6.4.4. Deleting a Helm release Procedure In the Topology view, right-click the Helm release and select Delete Helm Release . In the confirmation prompt, enter the name of the chart and click Delete . | [
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/working-with-helm-charts |
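A quick way to sanity-check a chart before creating a release is to render it locally and review its default values. The following is a minimal sketch that reuses the openshift-helm-charts repository and the HashiCorp Vault example from the procedure above; the release name and namespace are only placeholders.

# confirm the repository is configured and the chart is visible
helm repo list
helm search repo openshift-helm-charts/hashicorp-vault

# inspect the chart's default values and render its templates locally; nothing is installed yet
helm show values openshift-helm-charts/hashicorp-vault
helm template example-vault openshift-helm-charts/hashicorp-vault --namespace vault

# install the chart, or upgrade it in place if the release already exists
helm upgrade --install example-vault openshift-helm-charts/hashicorp-vault --namespace vault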
Chapter 4. Configuring kernel command-line parameters | Chapter 4. Configuring kernel command-line parameters With kernel command-line parameters, you can change the behavior of certain aspects of the Red Hat Enterprise Linux kernel at boot time. As a system administrator, you control which options get set at boot. Note that certain kernel behaviors can only be set at boot time. Important Changing the behavior of the system by modifying kernel command-line parameters can have negative effects on your system. Always test changes before deploying them in production. For further guidance, contact Red Hat Support. 4.1. What are kernel command-line parameters With kernel command-line parameters, you can overwrite default values and set specific hardware settings. At boot time, you can configure the following features: The Red Hat Enterprise Linux kernel The initial RAM disk The user space features By default, the kernel command-line parameters for systems using the GRUB boot loader are defined in the boot entry configuration file for each kernel boot entry. You can manipulate boot loader configuration files by using the grubby utility. With grubby , you can perform these actions: Change the default boot entry. Add or remove arguments from a GRUB menu entry. Additional resources kernel-command-line(7) , bootparam(7) and dracut.cmdline(7) manual pages How to install and boot custom kernels in Red Hat Enterprise Linux 8 The grubby(8) manual page 4.2. Understanding boot entries A boot entry is a collection of options stored in a configuration file and tied to a particular kernel version. In practice, you have at least as many boot entries as your system has installed kernels. The boot entry configuration file is located in the /boot/loader/entries/ directory: The file name above consists of a machine ID stored in the /etc/machine-id file, and a kernel version. The boot entry configuration file contains information about the kernel version, the initial ramdisk image, and the kernel command-line parameters. The example contents of a boot entry config can be seen below: 4.3. Changing kernel command-line parameters for all boot entries Change kernel command-line parameters for all boot entries on your system. Important When installing a newer version of the kernel in RHEL 9 systems, the grubby tool passes the kernel command-line arguments from the previous kernel version. However, this does not apply to RHEL version 9.0 in which newly installed kernels lose the previous command-line options. You must run the grub2-mkconfig command on the newly installed kernel to pass the parameters to your new kernel. For more information about this known issue, see Boot loader . Prerequisites grubby utility is installed on your system. zipl utility is installed on your IBM Z system. Procedure To add a parameter: For systems that use the GRUB boot loader and, on IBM Z, the zIPL boot loader, the command adds a new kernel parameter to each /boot/loader/entries/< ENTRY >.conf file. On IBM Z, update the boot menu: To remove a parameter: On IBM Z, update the boot menu: Additional resources What are kernel command-line parameters grubby(8) and zipl(8) manual pages 4.4. Changing kernel command-line parameters for a single boot entry Make changes to kernel command-line parameters for a single boot entry on your system. Prerequisites grubby and zipl utilities are installed on your system.
Procedure To add a parameter: On IBM Z, update the boot menu: To remove a parameter: On IBM Z, update the boot menu: Important grubby modifies and stores the kernel command-line parameters of an individual kernel boot entry in the /boot/loader/entries/< ENTRY >.conf file. 4.5. Changing kernel command-line parameters temporarily at boot time Make temporary changes to a Kernel Menu Entry by changing the kernel parameters only during a single boot process. Note This procedure applies only for a single boot and does not persistently make the changes. Procedure Boot into the GRUB boot menu. Select the kernel you want to start. Press the e key to edit the kernel parameters. Find the kernel command line by moving the cursor down. The kernel command line starts with linux on 64-Bit IBM Power Series and x86-64 BIOS-based systems, or linuxefi on UEFI systems. Move the cursor to the end of the line. Note Press Ctrl + a to jump to the start of the line and Ctrl + e to jump to the end of the line. On some systems, Home and End keys might also work. Edit the kernel parameters as required. For example, to run the system in emergency mode, add the emergency parameter at the end of the linux line: To enable the system messages, remove the rhgb and quiet parameters. Press Ctrl + x to boot with the selected kernel and the modified command line parameters. Important If you press the Esc key to leave command line editing, it will drop all the user made changes. 4.6. Configuring GRUB settings to enable serial console connection The serial console is beneficial when you need to connect to a headless server or an embedded system and the network is down. Or when you need to avoid security rules and obtain login access on a different system. You need to configure some default GRUB settings to use the serial console connection. Prerequisites You have root permissions. Procedure Add the following two lines to the /etc/default/grub file: The first line disables the graphical terminal. The GRUB_TERMINAL key overrides values of GRUB_TERMINAL_INPUT and GRUB_TERMINAL_OUTPUT keys. The second line adjusts the baud rate ( --speed ), parity and other values to fit your environment and hardware. Note that a much higher baud rate, for example 115200, is preferable for tasks such as following log files. Update the GRUB configuration file. On BIOS-based machines: On UEFI-based machines: Reboot the system for the changes to take effect. 4.7. Changing boot entries with the GRUB configuration file The /etc/default/grub GRUB configuration file contains the GRUB_CMDLINE_LINUX key, which lists kernel command-line arguments to add to boot entries for the Linux kernel. For example: To change the boot entries, overwrite Boot Loader Specification (BLS) snippets with the contents of the GRUB_CMDLINE_LINUX values. Prerequisites A fresh RHEL 9 installation. Procedure Add or remove a kernel parameter for individual kernels in a post installation script with grubby : For example, add the noapic parameter to the chosen kernel: The parameter is propagated into the BLS snippets, but not into the /etc/default/grub file. Overwrite BLS snippets with the contents of the GRUB_CMDLINE_LINUX values present in the /etc/default/grub file: Note Other changes, such as changes made to GRUB_TIMEOUT key (also included in the /etc/default/grub GRUB configuration file), do get propagated to the new grub.cfg by default. Verification Reboot your operating system. Verify that the parameters are included in the /proc/cmdline file. 
For example, /proc/cmdline contains the noapic kernel parameter: | [
"d8712ab6d4f14683c5625e87b52b6b6e-5.14.0-1.el9.x86_64.conf",
"title Red Hat Enterprise Linux (5.14.0-1.el9.x86_64) 9.0 (Plow) version 5.14.0-1.el9.x86_64 linux /vmlinuz-5.14.0-1.el9.x86_64 initrd /initramfs-5.14.0-1.el9.x86_64.img options root=/dev/mapper/rhel_kvm--02--guest08-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel_kvm--02--guest08-swap rd.lvm.lv=rhel_kvm-02-guest08/root rd.lvm.lv=rhel_kvm-02-guest08/swap console=ttyS0,115200 grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"grubby --update-kernel=ALL --args=\"< NEW_PARAMETER >\"",
"zipl",
"grubby --update-kernel=ALL --remove-args=\"< PARAMETER_TO_REMOVE >\"",
"zipl",
"grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"< NEW_PARAMETER >\"",
"zipl",
"grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --remove-args=\"< PARAMETER_TO_REMOVE >\"",
"zipl",
"linux (USDroot)/vmlinuz-5.14.0-63.el9.x86_64 root=/dev/mapper/rhel-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet emergency",
"GRUB_TERMINAL=\"serial\" GRUB_SERIAL_COMMAND=\"serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"GRUB_CMDLINE_LINUX=\"crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap\"",
"grubby --update-kernel < PATH_TO_KERNEL > --args \"< NEW_ARGUMENTS >\"",
"grubby --update-kernel /boot/vmlinuz-5.14.0-362.8.1.el9_3.x86_64 --args \"noapic\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline Generating grub configuration file ... Adding boot menu entry for UEFI Firmware Settings ... done",
"BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-425.3.1.el8.x86_64 root=/dev/mapper/RHELCSB-Root ro vconsole.keymap=us crashkernel=auto rd.lvm.lv=RHELCSB/Root rd.luks.uuid=luks-d8a28c4c-96aa-4319-be26-96896272151d rhgb quiet noapic rd.luks.key=d8a28c4c-96aa-4319-be26-96896272151d=/keyfile:UUID=c47d962e-4be8-41d6-8216-8cf7a0d3b911 ipv6.disable=1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel |
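As a compact end-to-end sketch of the grubby workflow described above: add a parameter, confirm that it is recorded in the boot entries, and verify it on the running kernel after a reboot. The log_buf_len parameter is used here purely as an example.

# inspect the current boot entries and their arguments
grubby --info=ALL

# add a parameter to every boot entry, then confirm it was recorded
grubby --update-kernel=ALL --args="log_buf_len=4M"
grubby --info=DEFAULT | grep args

# after the next reboot, verify that the running kernel picked it up
grep -o 'log_buf_len=[^ ]*' /proc/cmdline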
Chapter 7. Uninstalling a cluster on Alibaba Cloud | Chapter 7. Uninstalling a cluster on Alibaba Cloud You can remove a cluster that you deployed to Alibaba Cloud. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with user-provisioned infrastructure clusters. There might be resources that the installation program did not create or that the installation program is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_alibaba/uninstalling-cluster-alibaba |
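Before running the destroy command, it can be worth confirming that the installation directory still contains the metadata the installer needs; a small sketch, with <installation_directory> as a placeholder:

# the installer reads metadata.json from the original installation directory
ls <installation_directory>/metadata.json

# run the destroy with more verbose logging when troubleshooting
./openshift-install destroy cluster --dir <installation_directory> --log-level debug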
22.3. The USB Filter Editor | 22.3. The USB Filter Editor 22.3.1. Installing the USB Filter Editor The USB Filter Editor is a Windows tool used to configure the usbfilter.txt policy file. The policy rules defined in this file allow or deny automatic passthrough of specific USB devices from client machines to virtual machines managed using the Red Hat Virtualization Manager. The policy file resides on the Red Hat Virtualization Manager in the following location: /etc/ovirt-engine/usbfilter.txt Changes to USB filter policies do not take effect unless the ovirt-engine service on the Red Hat Virtualization Manager is restarted. Download the USB Filter Editor installer from "Installers and Images for Red Hat Virtualization Manager" . Installing the USB Filter Editor On a Windows machine, run the .msi file you downloaded for the USB Filter Editor . Follow the steps of the installation wizard. Unless otherwise specified, the USB Filter Editor will be installed by default in either C:\Program Files\RedHat\USB Filter Editor or C:\Program Files(x86)\RedHat\USB Filter Editor depending on your version of Windows. A USB Filter Editor shortcut icon is created on your desktop. Important Use a Secure Copy (SCP) client to import and export filter policies from the Red Hat Virtualization Manager. A Secure Copy tool for Windows machines is WinSCP ( http://winscp.net ). The default USB device policy provides virtual machines with basic access to USB devices; update the policy to allow the use of additional USB devices. 22.3.2. The USB Filter Editor Interface Double-click the USB Filter Editor shortcut icon on your desktop. The Red Hat USB Filter Editor interface displays the Class , Vendor , Product , Revision , and Action for each USB device. Permitted USB devices are set to Allow in the Action column; prohibited devices are set to Block . Table 22.1. USB Editor Fields Name Description Class Type of USB device; for example, printers, mass storage controllers. Vendor The manufacturer of the selected type of device. Product The specific USB device model. Revision The revision of the product. Action Allow or block the specified device. The USB device policy rules are processed in their listed order. Use the Up and Down buttons to move rules higher or lower in the list. The universal Block rule needs to remain as the lowest entry to ensure all USB devices are denied unless explicitly allowed in the USB Filter Editor. 22.3.3. Adding a USB Policy Double-click the USB Filter Editor shortcut icon on your desktop to open the editor. Adding a USB Policy Click Add . Use the USB Class , Vendor ID , Product ID , and Revision check boxes and lists to specify the device. Click the Allow button to permit virtual machines use of the USB device; click the Block button to prohibit the USB device from virtual machines. Click OK to add the selected filter rule to the list and close the window. Example 22.1. Adding a Device The following is an example of how to add USB Class Smartcard , device EP-1427X-2 Ethernet Adapter , from manufacturer Acer Communications & Multimedia to the list of allowed devices. Click File Save to save the changes. You have added a USB policy to the USB Filter Editor. USB filter policies must be exported to the Red Hat Virtualization Manager to take effect. 22.3.4. Removing a USB Policy Double-click the USB Filter Editor shortcut icon on your desktop to open the editor. Removing a USB Policy Select the policy to be removed. Click Remove . 
A message displays prompting you to confirm that you want to remove the policy. Click Yes to confirm that you want to remove the policy. Click File Save to save the changes. You have removed a USB policy from the USB Filter Editor. USB filter policies must be exported to the Red Hat Virtualization Manager to take effect. 22.3.5. Searching for USB Device Policies Search for attached USB devices to either allow or block them in the USB Filter Editor. Double-click the USB Filter Editor shortcut icon on your desktop to open the editor. Searching for USB Device Policies Click Search . The Attached USB Devices window displays a list of all the attached devices. Select the device and click Allow or Block as appropriate. Double-click the selected device to close the window. A policy rule for the device is added to the list. Use the Up and Down buttons to change the position of the new policy rule in the list. Click File Save to save the changes. You have searched the attached USB devices. USB filter policies need to be exported to the Red Hat Virtualization Manager to take effect. 22.3.6. Exporting a USB Policy USB device policy changes need to be exported and uploaded to the Red Hat Virtualization Manager for the updated policy to take effect. Upload the policy and restart the ovirt-engine service. Double-click the USB Filter Editor shortcut icon on your desktop to open the editor. Exporting a USB Policy Click Export ; the Save As window opens. Save the file with a file name of usbfilter.txt . Using a Secure Copy client, such as WinSCP, upload the usbfilter.txt file to the server running Red Hat Virtualization Manager. The file must be placed in the following directory on the server: /etc/ovirt-engine/ As the root user on the server running Red Hat Virtualization Manager, restart the ovirt-engine service. 22.3.7. Importing a USB Policy An existing USB device policy must be downloaded and imported into the USB Filter Editor before you can edit it. Importing a USB Policy Using a Secure Copy client, such as WinSCP, download the usbfilter.txt file from the server running Red Hat Virtualization Manager. The file can be found in the following directory on the server: /etc/ovirt-engine/ Double-click the USB Filter Editor shortcut icon on your desktop to open the editor. Click Import to open the Open window. Open the usbfilter.txt file that was downloaded from the server. | [
"systemctl restart ovirt-engine.service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-the_usb_filter_editor |
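If you prefer a command line to WinSCP for the export and import steps, the same transfer can be done with scp from a Linux workstation. A minimal sketch, with the Manager hostname used here only as a placeholder:

# download the current policy for editing in the USB Filter Editor
scp root@rhvm.example.com:/etc/ovirt-engine/usbfilter.txt .

# upload the edited policy and restart the engine so the change takes effect
scp usbfilter.txt root@rhvm.example.com:/etc/ovirt-engine/usbfilter.txt
ssh root@rhvm.example.com 'systemctl restart ovirt-engine.service'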
Chapter 4. Custom user attributes | Chapter 4. Custom user attributes You can add custom user attributes to the registration page and account management console with a custom theme. 4.1. Registration page Use this procedure to enter custom attributes in the registration page. Procedure Copy the template themes/base/login/register.ftl to the login type of your custom theme. Open the copy in an editor. For example, to add a mobile number to the registration page, add the following snippet to the form: <div class="form-group"> <div class="USD{properties.kcLabelWrapperClass!}"> <label for="user.attributes.mobile" class="USD{properties.kcLabelClass!}">Mobile number</label> </div> <div class="USD{properties.kcInputWrapperClass!}"> <input type="text" class="USD{properties.kcInputClass!}" id="user.attributes.mobile" name="user.attributes.mobile" value="USD{(register.formData['user.attributes.mobile']!'')}"/> </div> </div> Ensure the name of the input html element starts with user.attributes . In the example above, the attribute will be stored by Red Hat Single Sign-On with the name mobile . To see the changes, make sure your realm is using your custom theme for the login theme and open the registration page. 4.2. Account Management Console Use this procedure to manage custom attributes in the user profile page in the account management console. Procedure Copy the template themes/base/account/account.ftl to the account type of your custom theme. Open the copy in an editor. As an example to add a mobile number to the account page add the following snippet to the form: <div class="form-group"> <div class="col-sm-2 col-md-2"> <label for="user.attributes.mobile" class="control-label">Mobile number</label> </div> <div class="col-sm-10 col-md-10"> <input type="text" class="form-control" id="user.attributes.mobile" name="user.attributes.mobile" value="USD{(account.attributes.mobile!'')}"/> </div> </div> Ensure the name of the input html element starts with user.attributes . To see the changes, make sure your realm is using your custom theme for the account theme and open the user profile page in the account management console. 4.3. Additional resources See Themes for how to create a custom theme. | [
"<div class=\"form-group\"> <div class=\"USD{properties.kcLabelWrapperClass!}\"> <label for=\"user.attributes.mobile\" class=\"USD{properties.kcLabelClass!}\">Mobile number</label> </div> <div class=\"USD{properties.kcInputWrapperClass!}\"> <input type=\"text\" class=\"USD{properties.kcInputClass!}\" id=\"user.attributes.mobile\" name=\"user.attributes.mobile\" value=\"USD{(register.formData['user.attributes.mobile']!'')}\"/> </div> </div>",
"<div class=\"form-group\"> <div class=\"col-sm-2 col-md-2\"> <label for=\"user.attributes.mobile\" class=\"control-label\">Mobile number</label> </div> <div class=\"col-sm-10 col-md-10\"> <input type=\"text\" class=\"form-control\" id=\"user.attributes.mobile\" name=\"user.attributes.mobile\" value=\"USD{(account.attributes.mobile!'')}\"/> </div> </div>"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_developer_guide/custom_user_attributes |
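To confirm that a value registered through the custom field, such as mobile, was actually stored on the user, you can query the Admin REST API. This is only a rough sketch: it assumes a local server on port 8080 with the default /auth context, an admin account in the master realm, a realm named myrealm, a user named alice, and that curl and jq are available.

# obtain an admin access token
TOKEN=$(curl -s \
  -d "client_id=admin-cli" -d "grant_type=password" \
  -d "username=admin" -d "password=admin" \
  http://localhost:8080/auth/realms/master/protocol/openid-connect/token | jq -r .access_token)

# look up the user and print the stored custom attributes
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/auth/admin/realms/myrealm/users?username=alice" | jq '.[0].attributes'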
Chapter 54. JQ | Chapter 54. JQ Since Camel 3.18 Camel supports JQ to allow using Expression or Predicate on JSON messages. 54.1. Dependencies When using jq with Red Hat build of Camel Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jq-starter</artifactId> </dependency> 54.2. JQ Options The JQ language supports 4 options, which are listed below. Name Default Java Type Description headerName String Name of header to use as input, instead of the message body. It has a higher precedence than the propertyName if both are set. propertyName String Name of property to use as input, instead of the message body. It has a lower precedence than the headerName if both are set. resultType String Sets the class of the result type (type from output). trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 54.3. Examples For example, you can use JQ in a Predicate with the Content Based Router EIP . from("queue:books.new") .choice() .when().jq(".store.book.price < 10") .to("jms:queue:book.cheap") .when().jq(".store.book.price < 30") .to("jms:queue:book.average") .otherwise() .to("jms:queue:book.expensive"); 54.4. Message body types Camel JQ leverages camel-jackson for type conversion. To enable camel-jackson POJO type conversion, refer to the Camel Jackson documentation. 54.5. Using header as input By default, JQ uses the message body as the input source. However, you can also use a header as input by specifying the headerName option. For example, to count the number of books from a JSON document that was stored in a header named books , you can do: from("direct:start") .setHeader("numberOfBooks") .jq(".store.books | length", int.class, "books") .to("mock:result"); 54.6. Camel supplied JQ Functions The camel-jq component adds the following functions: header - Allows access to the Message header in a JQ expression. For example, to set the property foo with the value from the Message header MyHeader : from("direct:start") .transform() .jq(".foo = header(\"MyHeader\")") .to("mock:result"); 54.7. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.language.jq.enabled Whether to enable auto configuration of the jq language. This is enabled by default. Boolean camel.language.jq.header-name Name of header to use as input, instead of the message body. It has a higher precedence than the propertyName if both are set. String camel.language.jq.property-name Name of property to use as input, instead of the message body. It has a lower precedence than the headerName if both are set. String camel.language.jq.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jq-starter</artifactId> </dependency>",
"from(\"queue:books.new\") .choice() .when().jq(\".store.book.price < 10)\") .to(\"jms:queue:book.cheap\") .when().jq(\".store.book.price < 30)\") .to(\"jms:queue:book.average\") .otherwise() .to(\"jms:queue:book.expensive\");",
"from(\"direct:start\") .setHeader(\"numberOfBooks\") .jq(\".store.books | length\", int.class, \"books\") .to(\"mock:result\");",
"from(\"direct:start\") .transform() .jq(\".foo = header(\\\"MyHeader\\\")\") .to(\"mock:result\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jq-language-component-starter |
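Because camel-jq expressions use ordinary jq syntax, one way to sanity-check an expression before wiring it into a route is the standalone jq command line tool, if you have it installed. Note that Camel-specific functions such as header() are not available there. A small sketch with made-up sample data:

# the predicate from the Content Based Router example; prints "true"
echo '{ "store": { "book": { "price": 8 } } }' | jq '.store.book.price < 10'

# the expression used to count books stored in a JSON document; prints "2"
echo '{ "store": { "books": [ { "title": "a" }, { "title": "b" } ] } }' | jq '.store.books | length'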
Chapter 6. Configure key based SSH authentication without a password | Chapter 6. Configure key based SSH authentication without a password Configure key-based SSH authentication without a password for the root user from the host to itself. 6.1. Generating SSH key pairs without a password Generating a public/private key pair lets you use key-based SSH authentication. Generating a key pair that does not use a password makes it simpler to use Ansible to automate deployment and configuration processes. Procedure Log in to the first hyperconverged host as the root user. Generate an SSH key that does not use a password. Start the key generation process. Enter a location for the key. The default location, shown in parentheses, is used if no other input is provided. Specify and confirm an empty passphrase by pressing Enter twice. The private key is saved in <location>/<keyname> . The public key is saved in <location>/<keyname>.pub . Warning Your identification in this output is your private key. Never share your private key. Possession of your private key allows someone else to impersonate you on any system that has your public key. 6.2. Copying SSH keys To access a host using your private key, that host needs a copy of your public key. Prerequisites Generate a public/private key pair with no password. Procedure Log in to the host as the root user. Copy the public key to the same host: Enter the password for root@<hostname> when prompted. Warning Make sure that you use the file that ends in .pub . Never share your private key. Possession of your private key allows someone else to impersonate you on any system that has your public key. For example, if you are logged in as the root user on server1.example.com , you would run the following commands: | [
"ssh-keygen -t rsa Generating public/private rsa key pair.",
"Enter file in which to save the key (/home/username/.ssh/id_rsa): <location>/<keyname>",
"Enter passphrase (empty for no passphrase): Enter same passphrase again:",
"Your identification has been saved in <location>/<keyname>. Your public key has been saved in <location>/<keyname>.pub. The key fingerprint is SHA256:8BhZageKrLXM99z5f/AM9aPo/KAUd8ZZFPcPFWqK6+M [email protected] The key's randomart image is: +---[ECDSA 256]---+ | . . +=| | . . . = o.o| | + . * . o...| | = . . * . + +..| |. + . . So o * ..| | . o . .+ = ..| | o oo ..=. .| | ooo...+ | | .E++oo | +----[SHA256]-----+",
"ssh-copy-id -i <location>/<keyname>.pub root@<hostname>",
"ssh-copy-id -i <location>/<keyname>.pub [email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/task-configure-key-based-ssh-auth-single-node |
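After copying the key, you can confirm that key-based login works without a password prompt; a short sketch that reuses the server1.example.com example and assumes the key was saved in the default location:

# force public key authentication; a successful hostname output means the key was accepted
ssh -o PasswordAuthentication=no root@server1.example.com hostname

# show the fingerprint of the local public key for reference (default ~/.ssh/id_rsa.pub location assumed)
ssh-keygen -lf ~/.ssh/id_rsa.pub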
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 1.0-31 Thu Apr 28 2016 Jana Heves Red Hat Enterprise Linux 6.8 GA release of the Resource Management Guide Revision 1.0-29 Tue May 12 2015 Radek Biba Asynchronous update. Revision 1.0-26 Thu Oct 10 2014 Peter Ondrejka Red Hat Enterprise Linux 6.6 GA release of the Resource Management Guide Revision 1.0-16 Thu Feb 21 2013 Martin Prpic Red Hat Enterprise Linux 6.4 GA release of the Resource Management Guide Revision 1.0-7 Wed Jun 20 2012 Martin Prpic Red Hat Enterprise Linux 6.3 GA release of the Resource Management Guide . Revision 1.0-6 Tue Dec 6 2011 Martin Prpic Red Hat Enterprise Linux 6.2 GA release of the Resource Management Guide Revision 1.0-5 Thu May 19 2011 Martin Prpic Red Hat Enterprise Linux 6.1 GA release of the Resource Management Guide Revision 1.0-0 Tue Nov 9 2010 Rudiger Landmann Feature-complete version for GA | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/appe-resource_management_guide-revision_history |
Chapter 4. Administrator tasks | Chapter 4. Administrator tasks 4.1. Adding Operators to a cluster Cluster administrators can install Operators to an OpenShift Container Platform cluster by subscribing Operators to namespaces with OperatorHub. 4.1.1. Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 4.1.2. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. 
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 4.1.3. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. 
Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources About Operator groups 4.1.4. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions OpenShift CLI ( oc ) installed Procedure Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. 
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0: Subscription with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: USD oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator upgrade 4.1.5. Pod placement of Operator workloads By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes. Controlling pod placement of Operator and Operand workloads has the following prerequisites: Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app , that identifies the node or nodes. Otherwise, add a label, such as myoperator , by using a machine set or editing the node directly. You will use this label in a later step as the node selector on your project. If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain. Create a project that is configured with a default node selector and, if you added a taint, a matching toleration. At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios: For Operator pods Administrators can create a Subscription object in the project. As a result, the Operator pods are placed on the specified nodes. For Operand pods Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply. Additional resources Adding taints and tolerations manually to nodes or with machine sets Creating project-wide node selectors Creating a project with a node selector and toleration 4.2. Upgrading installed Operators As a cluster administrator, you can upgrade Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster. 4.2.1. 
Changing the update channel for an Operator The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Operator to start tracking and receiving updates from a newer channel, you can change the update channel in the subscription. The names of update channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note Installed Operators cannot change to a channel that is older than the current channel. If the approval strategy in the subscription is set to Automatic , the upgrade process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending upgrades. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the upgrade begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the upgrade from the Subscription tab. 4.2.2. Manually approving a pending Operator upgrade If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending upgrade display a status with Upgrade available . Click the name of the Operator you want to upgrade. Click the Subscription tab. Any upgrades requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . 4.3. Deleting Operators from a cluster The following describes how to delete Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster. 4.3.1. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure From the Operators Installed Operators page, scroll or type a keyword into the Filter by name to find the Operator you want. Then, click on it. 
On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed, reminding you that: Removing the Operator will not remove any of its custom resource definitions or managed resources. If your Operator has deployed applications on the cluster or configured off-cluster resources, these will continue to run and need to be cleaned up manually. This action removes the Operator as well as the Operator deployments and pods, if any. Any Operands, and resources managed by the Operator, including CRDs and CRs, are not removed. The web console enables dashboards and navigation items for some Operators. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. Select Uninstall . This Operator stops running and no longer receives updates. 4.3.2. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. Procedure Check the current version of the subscribed Operator (for example, jaeger ) in the currentCSV field: USD oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV Example output currentCSV: jaeger-operator.v1.8.2 Delete the subscription (for example, jaeger ): USD oc delete subscription jaeger -n openshift-operators Example output subscription.operators.coreos.com "jaeger" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators Example output clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted 4.3.3. Refreshing failing subscriptions In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors: Example output ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e" Example output rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade. You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator. Prerequisites You have a failing subscription that is unable to pull an inaccessible bundle image. You have confirmed that the correct bundle image is accessible. 
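To confirm the second prerequisite, you can try to pull the bundle image directly from a workstation that uses the same pull credentials as the cluster. The image reference below is the hypothetical one from the earlier error message and is shown only as an example:
USD podman pull example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e
If the pull also fails from the workstation, resolve the registry access or mirroring problem first; recreating the subscription does not help while the image remains unreachable.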
Procedure Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed: USD oc get sub,csv -n <namespace> Example output NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded Delete the subscription: USD oc delete subscription <subscription_name> -n <namespace> Delete the cluster service version: USD oc delete csv <csv_name> -n <namespace> Get the names of any failing jobs and related config maps in the openshift-marketplace namespace: USD oc get job,configmap -n openshift-marketplace Example output NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s Delete the job: USD oc delete job <job_name> -n openshift-marketplace This ensures pods that try to pull the inaccessible image are not recreated. Delete the config map: USD oc delete configmap <configmap_name> -n openshift-marketplace Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> 4.4. Configuring proxy support in Operator Lifecycle Manager If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate. Additional resources Configuring the cluster-wide proxy Configuring a custom PKI (custom CA certificate) 4.4.1. Overriding proxy settings of an Operator If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator. Important Operators must handle setting environment variables for proxy settings in the pods for any managed Operands. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Select the Operator and click Install . On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section: HTTP_PROXY HTTPS_PROXY NO_PROXY For example: Subscription object with proxy setting overrides apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide Note These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings. 
OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator. Click Install to make the Operator available to the selected namespaces. After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI: USD oc get deployment -n openshift-operators \ etcd-operator -o yaml \ | grep -i "PROXY" -A 2 Example output - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c ... 4.4.2. Injecting a custom CA certificate When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Custom CA certificate added to the cluster using a config map. Desired Operator installed and running on OLM. Procedure Create an empty config map in the namespace where the subscription for your Operator exists and include the following label: apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: "true" 2 1 Name of the config map. 2 Requests the Cluster Network Operator to inject the merged bundle. After creating this config map, it is immediately populated with the certificate contents of the merged bundle. Update the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true 1 Add a config section if it does not exist. 2 Specify labels to match pods that are owned by the Operator. 3 Create a trusted-ca volume. 4 ca-bundle.crt is required as the config map key. 5 tls-ca-bundle.pem is required as the config map path. 6 Create a trusted-ca volume mount. 4.5. Viewing Operator status Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the health of their Operators. 4.5.1. Operator subscription condition types Subscriptions can report the following condition types: Table 4.1. Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing. InstallPlanPending An install plan for a subscription is pending installation.
InstallPlanFailed An install plan for a subscription has failed. Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Refreshing failing subscriptions 4.5.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 4.5.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. 
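To quickly scan for unhealthy catalog sources without describing each one, you can print the last observed connection state for every catalog source in the namespace. This is a sketch that relies on the status.connectionState field shown in the preceding output:
USD oc get catalogsources -n openshift-marketplace \
    -o custom-columns=NAME:.metadata.name,STATE:.status.connectionState.lastObservedState
A healthy source typically reports READY , while values such as CONNECTING or TRANSIENT_FAILURE indicate that the source needs further investigation.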
List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity Accessing images for Operators from private registries 4.6. Managing Operator conditions As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM). 4.6.1. Overriding Operator conditions As a cluster administrator, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Status.Conditions array, allowing cluster administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM). For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition resource. Prerequisites An Operator with an OperatorCondition resource, installed using OLM. 
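Before overriding a condition, you can confirm that the Operator has an OperatorCondition resource and review what it currently reports. The namespace and resource name below are placeholders:
USD oc get operatorconditions -n <namespace>
USD oc get operatorcondition <operatorcondition_name> -n <namespace> -o yaml
Reviewing the existing conditions first lets you copy the exact condition type, such as Upgradeable , into the Spec.Overrides entry that is added in the following procedure.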
Procedure Edit the OperatorCondition resource for the Operator: USD oc edit operatorcondition <name> Add a Spec.Overrides array to the object: Example Operator condition override apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: "True" reason: "upgradeIsSafe" message: "This is a known issue with the Operator where it always reports that it cannot be upgraded." status: conditions: - type: Upgradeable status: "False" reason: "migration" message: "The operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Allows the cluster administrator to change the upgrade readiness to True . 4.6.2. Updating your Operator to use Operator conditions Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator. An Operator author can develop their Operator to use the operator-lib library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more on writing logic to set Operator conditions as an Operator author, see the Operator SDK documentation. 4.6.2.1. Setting defaults In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true . This provides the Operator with a grace period to update the condition to the correct state. 4.6.3. Additional resources Operator conditions 4.7. Allowing non-cluster administrators to install Operators Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV) and OLM will consequently grant it to the Operator. Cluster administrators should take measures to ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM. One method for locking this down requires cluster administrators auditing Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade using service accounts. By associating an Operator group with a service account that has a set of privileges granted to it, cluster administrators can set policy on Operators to ensure they operate only within predetermined boundaries using RBAC rules. The Operator is unable to do anything that is not explicitly permitted by those rules. This self-sufficient, limited scope installation of Operators by non-cluster administrators means that more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators. 4.7.1. Understanding Operator installation policy Using Operator Lifecycle Manager (OLM), cluster administrators can choose to specify a service account for an Operator group so that all Operators associated with the group are deployed and run against the privileges granted to the service account. 
The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources. If the specified service account does not have adequate permissions for an Operator that is being installed or upgraded, useful and contextual information is added to the status of the respective resource(s) so that it is easy for the cluster administrator to troubleshoot and resolve the issue. Any Operator tied to this Operator group is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors. 4.7.1.1. Installation scenarios When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios: A cluster administrator creates a new Operator group and specifies a service account. All Operators associated with this Operator group are installed and run against the privileges granted to the service account. A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted. For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted. A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with its current privileges. When such an existing Operator is going through an upgrade, it is reinstalled and run against the privileges granted to the service account like any new Operator. A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account like any new Operator. A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted. 4.7.1.2. Installation workflow When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow: The given Subscription object is picked up by OLM. OLM fetches the Operator group tied to this subscription. OLM determines that the Operator group has a service account specified. OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group. OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account. 4.7.2. Scoping Operator installations To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group. Using the following example, a cluster administrator can confine a set of Operators to a designated namespace.
Procedure Create a new namespace: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s). USD cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it. In addition, Operator groups allow a user to specify a service account. Specify the service account created in the step: USD cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified. Create a Subscription object in the designated namespace to install an Operator: USD cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF 1 Specify a catalog source that already exists in the designated namespace or one that is in the global catalog namespace. 2 Specify a namespace where the catalog source was created. Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors. 4.7.2.1. Fine-grained permissions Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed: ClusterServiceVersion Subscription Secret ServiceAccount Service ClusterRole and ClusterRoleBinding Role and RoleBinding To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account: Note The following role is a generic example and additional rules might be required based on the specific Operator. 
kind: Role rules: - apiGroups: ["operators.coreos.com"] resources: ["subscriptions", "clusterserviceversions"] verbs: ["get", "create", "update", "patch"] - apiGroups: [""] resources: ["services", "serviceaccounts"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["rbac.authorization.k8s.io"] resources: ["roles", "rolebindings"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["apps"] 1 resources: ["deployments"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] - apiGroups: [""] 2 resources: ["pods"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] 1 2 Add permissions to create other resources, such as deployments and pods shown here. In addition, if any Operator specifies a pull secret, the following permissions must also be added: kind: ClusterRole 1 rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get"] --- kind: Role rules: - apiGroups: [""] resources: ["secrets"] verbs: ["create", "update", "patch"] 1 Required to get the secret from the OLM namespace. 4.7.3. Troubleshooting permission failures If an Operator installation fails due to lack of permissions, identify the errors using the following procedure. Procedure Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] object(s) for the Operator: apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: "117359" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23 Check the status of the InstallPlan object for any errors: apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: "2019-07-26T21:13:10Z" lastUpdateTime: "2019-07-26T21:13:10Z" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope' reason: InstallComponentFailed status: "False" type: Installed phase: Failed The error message tells you: The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group. The name of the resource. The type of error: is forbidden tells you that the user does not have enough permission to do the operation. The name of the user who attempted to create or update the resource. In this case, it refers to the service account specified in the Operator group. The scope of the operation: cluster scope or not. The user can add the missing permission to the service account and then iterate. Note Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try. 4.8. Managing custom catalogs This guide describes how to work with custom catalogs for Operators packaged using either the Bundle Format or the legacy Package Manifest Format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform. Additional resources Red Hat-provided Operator catalogs 4.8.1. Custom catalogs using the Bundle Format 4.8.1.1. Prerequisites Install the opm CLI . 4.8.1.2. Creating an index image You can create an index image using the opm CLI. 
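If you are unsure whether your installed opm CLI meets the minimum version listed in the prerequisites that follow, you can check it first; opm provides a version subcommand for this:
USD opm version
Compare the reported version against the prerequisites before continuing.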
Prerequisites opm version 1.12.3+ podman version 1.9.3+ A bundle image built and pushed to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Start a new index: USD opm index add \ --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \ 2 [--binary-image <registry_base_image>] 3 1 Comma-separated list of bundle images to add to the index. 2 The image tag that you want the index image to have. 3 Optional: An alternative registry base image to use for serving the catalog. Push the index image to a registry. If required, authenticate with your target registry: USD podman login <registry> Push the index image: USD podman push <registry>/<namespace>/test-catalog:latest 4.8.1.3. Creating a catalog from an index image You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM). Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: <registry>:<port>/<namespace>/redhat-operator-index:v4.7 2 displayName: My Operator Catalog publisher: <publisher_name> 3 updateStrategy: registryPoll: 4 interval: 30m 1 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 2 Specify your index image. 3 Specify your name or an organization name publishing the catalog. 4 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources If your index image is hosted on a private registry and requires authentication, see Accessing images for Operators from private registries . 4.8.1.4. Updating an index image After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image. You can update an existing index image using the opm index add command. Prerequisites opm version 1.12.3+ podman version 1.9.3+ An index image built and pushed to a registry. An existing catalog source referencing the index image. 
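Because OLM only picks up new content when it polls the index image, it can be useful to confirm that the existing catalog source has a polling interval configured before you push an updated tag. The object name below matches the earlier my-operator-catalog example and is only illustrative:
USD oc get catalogsource my-operator-catalog -n openshift-marketplace \
    -o jsonpath='{.spec.updateStrategy.registryPoll.interval}'
If the command returns nothing, the catalog source is not configured to poll, and you might need to add an updateStrategy stanza or recreate the catalog source after the updated index image is pushed.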
Procedure Update the existing index by adding bundle images: USD opm index add \ --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3 --pull-tool podman 4 1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index. 2 The --from-index flag specifies the previously pushed index. 3 The --tag flag specifies the image tag to apply to the updated index image. 4 The --pull-tool flag specifies the tool used to pull container images. where: <registry> Specifies the hostname of the registry, such as quay.io or mirror.example.com . <namespace> Specifies the namespace of the registry, such as ocs-dev or abc . <new_bundle_image> Specifies the new bundle image to add to the registry, such as ocs-operator . <digest> Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 . <existing_index_image> Specifies the previously pushed image, such as abc-redhat-operator-index . <existing_tag> Specifies a previously pushed image tag, such as 4.7 . <updated_tag> Specifies the image tag to apply to the updated index image, such as 4.7.1 . Example command USD opm index add \ --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \ --from-index mirror.example.com/abc/abc-redhat-operator-index:4.7 \ --tag mirror.example.com/abc/abc-redhat-operator-index:4.7.1 \ --pull-tool podman Push the updated index image: USD podman push <registry>/<namespace>/<existing_index_image>:<updated_tag> After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added: USD oc get packagemanifests -n openshift-marketplace 4.8.1.5. Pruning an index image An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want. Prerequisites podman version 1.9.3+ grpcurl (third-party command-line tool) opm version 1.18.0+ Access to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Authenticate with your target registry: USD podman login <target_registry> Determine the list of packages you want to include in your pruned index. Run the source index image that you want to prune in a container. For example: USD podman run -p50051:50051 \ -it registry.redhat.io/redhat/redhat-operator-index:v4.7 Example output Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.7... Getting image source signatures Copying blob ae8a0c23f5b1 done ... INFO[0000] serving registry database=/database/index.db port=50051 In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index: USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example: Example snippets of packages list ... { "name": "advanced-cluster-management" } ... 
{ "name": "jaeger-product" } ... { { "name": "quay-operator" } ... In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process. Run the following command to prune the source index of all but the specified packages: USD opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.7 \ 1 -p advanced-cluster-management,jaeger-product,quay-operator \ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.7] \ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 4 1 Index to prune. 2 Comma-separated list of packages to keep. 3 Required only for IBM Power Systems and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version. 4 Custom tag for new index image being built. Run the following command to push the new index image to your target registry: USD podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 where <namespace> is any existing namespace on the registry. 4.8.2. Custom catalogs using the Package Manifest Format 4.8.2.1. Building a Package Manifest Format catalog image Cluster administrators can build a custom Operator catalog image based on the Package Manifest Format to be used by Operator Lifecycle Manager (OLM). The catalog image can be pushed to a container image registry that supports Docker v2-2 . For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation. Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. For this example, the procedure assumes use of a mirror registry that has access to both your network and the Internet. Note Only the Linux version of the oc client can be used for this procedure, because the Windows and macOS versions do not provide the oc adm catalog build command. Prerequisites Workstation with unrestricted network access oc version 4.3.5+ Linux client podman version 1.9.3+ Access to mirror registry that supports Docker v2-2 If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. 
Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials: USD AUTH_TOKEN=USD(curl -sH "Content-Type: application/json" \ -XPOST https://quay.io/cnr/api/v1/users/login -d ' { "user": { "username": "'"<quay_username>"'", "password": "'"<quay_password>"'" } }' | jq -r '.token') Procedure On the workstation with unrestricted network access, authenticate with the target mirror registry: USD podman login <registry_host_name> Authenticate with registry.redhat.io so that the base image can be pulled during the build: USD podman login registry.redhat.io Build a catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry: USD oc adm catalog build \ --appregistry-org redhat-operators \ 1 --from=registry.redhat.io/openshift4/ose-operator-registry:v4.7 \ 2 --filter-by-os="linux/amd64" \ 3 --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \ 4 [-a USD{REG_CREDS}] \ 5 [--insecure] \ 6 [--auth-token "USD{AUTH_TOKEN}"] 7 1 Organization (namespace) to pull from an App Registry instance. 2 Set --from to the Operator Registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version. 3 Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64 , linux/ppc64le , and linux/s390x . 4 Name your catalog image and include a tag, for example, v1 . 5 Optional: If required, specify the location of your registry credentials file. 6 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 7 Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token. Example output INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 ... Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1 Sometimes invalid manifests are accidentally introduced into catalogs provided by Red Hat; when this happens, you might see some errors: Example output with errors ... INFO[0014] directory dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package W1114 19:42:37.876180 34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found Uploading ... 244.9kB/s These errors are usually non-fatal, and if the Operator package mentioned does not contain an Operator you plan to install or a dependency of one, then they can be ignored. Additional resources Mirroring images for a disconnected installation 4.8.2.2. Mirroring a Package Manifest Format catalog image Cluster administrators can mirror a custom Operator catalog image based on the Package Manifest Format into a registry and use a catalog source to load the content onto their cluster. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.
Prerequisites Workstation with unrestricted network access A custom Operator catalog image based on the Package Manifest Format pushed to a supported registry oc version 4.3.5+ podman version 1.9.3+ Access to mirror registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json Procedure The oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring. You can choose to either: Allow the default behavior of the command to automatically mirror all of the image content to your mirror registry after generating manifests, or Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to a registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of the content. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step. On your workstation with unrestricted network access, run the following command: USD oc adm catalog mirror \ <registry_host_name>:<port>/olm/redhat-operators:v1 \ 1 <registry_host_name>:<port> \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] \ 5 [--manifests-only] 6 1 Specify your Operator catalog image. 2 Specify the fully qualified domain name (FQDN) for the target registry. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the catalog image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the catalog image. Valid values are linux/amd64 , linux/ppc64le , and linux/s390x . 6 Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry. Example output using database path mapping: /:/tmp/190214037 wrote database to /tmp/190214037 using database at: /tmp/190214037/bundles.db 1 ... 1 Temporary database generated by the command. After running the command, a manifests-<index_image_name>-<random_number>/ directory is created in the current directory and generates the following files: The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your catalog image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster. The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 
The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration. If you used the --manifests-only flag in the step and want to mirror only a subset of the content: Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them: Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file. For example, to retrieve a list of images that are similar to the string clusterlogging.4.3 : USD echo "select * from related_image \ where operatorbundle_name like 'clusterlogging.4.3%';" \ | sqlite3 -line /tmp/190214037/bundles.db 1 1 Refer to the output of the oc adm catalog mirror command to find the path of the database file. Example output image = registry.redhat.io/openshift-logging/kibana6-rhel8@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61 operatorbundle_name = clusterlogging.4.3.33-202008111029.p0 image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506 operatorbundle_name = clusterlogging.4.3.33-202008111029.p0 ... Use the results from the step to edit the mapping.txt file to only include the subset of images you want to mirror. For example, you can use the image values from the example output to find that the following matching lines exist in your mapping.txt file: Matching image mappings in mapping.txt registry.redhat.io/openshift-logging/kibana6-rhel8@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/kibana6-rhel8:a767c8f0 registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above two lines. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command: USD oc image mirror \ [-a USD{REG_CREDS}] \ --filter-by-os='.*' \ -f ./manifests-redhat-operators-<random_number>/mapping.txt Warning If the --filter-by-os flag remains unset or set to any value other than .* , the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image . The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail. Create the ImageContentSourcePolicy object: USD oc create -f ./manifests-redhat-operators-<random_number>/imageContentSourcePolicy.yaml You can now create a CatalogSource object to reference your mirrored content. Additional resources Architecture and operating system support for Operators If your catalog image is hosted on a private registry and requires authentication, see Accessing images for Operators from private registries . 4.8.2.3. 
Updating a Package Manifest Format catalog image After a cluster administrator has configured OperatorHub to use custom Operator catalog images, administrators can keep their OpenShift Container Platform cluster up to date with the latest Operators by capturing updates made to App Registry catalogs provided by Red Hat. This is done by building and pushing a new Operator catalog image, then replacing the existing spec.image parameter in the CatalogSource object with the new image digest. For this example, the procedure assumes a custom redhat-operators catalog image is already configured for use with OperatorHub. Note Only the Linux version of the oc client can be used for this procedure, because the Windows and macOS versions do not provide the oc adm catalog build command. Prerequisites Workstation with unrestricted network access oc version 4.3.5+ Linux client podman version 1.9.3+ Access to mirror registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. OperatorHub configured to use custom catalog images If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials: USD AUTH_TOKEN=USD(curl -sH "Content-Type: application/json" \ -XPOST https://quay.io/cnr/api/v1/users/login -d ' { "user": { "username": "'"<quay_username>"'", "password": "'"<quay_password>"'" } }' | jq -r '.token') Procedure On the workstation with unrestricted network access, authenticate with the target mirror registry: USD podman login <registry_host_name> Authenticate with registry.redhat.io so that the base image can be pulled during the build: USD podman login registry.redhat.io Build a new catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry: USD oc adm catalog build \ --appregistry-org redhat-operators \ 1 --from=registry.redhat.io/openshift4/ose-operator-registry:v4.7 \ 2 --filter-by-os="linux/amd64" \ 3 --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \ 4 [-a USD{REG_CREDS}] \ 5 [--insecure] \ 6 [--auth-token "USD{AUTH_TOKEN}"] 7 1 Organization (namespace) to pull from an App Registry instance. 2 Set --from to the Operator Registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version. 3 Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64 , linux/ppc64le , and linux/s390x . 4 Name your catalog image and include a tag, for example, v2 because it is the updated catalog. 5 Optional: If required, specify the location of your registry credentials file. 6 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 7 Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token. 
Example output INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 ... Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2 Mirror the contents of your catalog to your target registry. The following oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring and mirrors the images to your registry: USD oc adm catalog mirror \ <registry_host_name>:<port>/olm/redhat-operators:v2 \ 1 <registry_host_name>:<port> \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] 5 1 Specify your new Operator catalog image. 2 Specify the fully qualified domain name (FQDN) for the target registry. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the catalog image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the catalog image. Valid values are linux/amd64 , linux/ppc64le , and linux/s390x . Apply the newly generated manifests: USD oc replace -f ./manifests-redhat-operators-<random_number> Important It is possible that you do not need to apply the imageContentSourcePolicy.yaml manifest. Complete a diff of the files to determine if changes are necessary. Update your CatalogSource object that references your catalog image. If you have your original catalogsource.yaml file for this CatalogSource object: Edit your catalogsource.yaml file to reference your new catalog image in the spec.image field: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: <registry_host_name>:<port>/olm/redhat-operators:v2 1 displayName: My Operator Catalog publisher: grpc 1 Specify your new Operator catalog image. Use the updated file to replace the CatalogSource object: USD oc replace -f catalogsource.yaml Alternatively, edit the catalog source using the following command and reference your new catalog image in the spec.image parameter: USD oc edit catalogsource <catalog_source_name> -n openshift-marketplace Updated Operators should now be available from the OperatorHub page on your OpenShift Container Platform cluster. Additional resources Architecture and operating system support for Operators 4.8.2.4. Testing a Package Manifest Format catalog image You can validate Operator catalog image content by running it as a container and querying its gRPC API. To further test the image, you can then resolve a subscription in Operator Lifecycle Manager (OLM) by referencing the image in a catalog source. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry. Prerequisites A custom Package Manifest Format catalog image pushed to a supported registry podman version 1.9.3+ oc version 4.3.5+ Access to mirror registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. 
grpcurl Procedure Pull the Operator catalog image: USD podman pull <registry_host_name>:<port>/olm/redhat-operators:v1 Run the image: USD podman run -p 50051:50051 \ -it <registry_host_name>:<port>/olm/redhat-operators:v1 Query the running image for available packages using grpcurl : USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages Example output { "name": "3scale-operator" } { "name": "amq-broker" } { "name": "amq-online" } Get the latest Operator bundle in a channel: USD grpcurl -plaintext -d '{"pkgName":"kiali-ossm","channelName":"stable"}' localhost:50051 api.Registry/GetBundleForChannel Example output { "csvName": "kiali-operator.v1.0.7", "packageName": "kiali-ossm", "channelName": "stable", ... Get the digest of the image: USD podman inspect \ --format='{{index .RepoDigests 0}}' \ <registry_host_name>:<port>/olm/redhat-operators:v1 Example output example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 Assuming an Operator group exists in namespace my-ns that supports your Operator and its dependencies, create a CatalogSource object using the image digest. For example: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: custom-redhat-operators namespace: my-ns spec: sourceType: grpc image: example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 displayName: Red Hat Operators Create a subscription that resolves the latest available servicemeshoperator and its dependencies from your catalog image: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: servicemeshoperator namespace: my-ns spec: source: custom-redhat-operators sourceNamespace: my-ns name: servicemeshoperator channel: "1.0" 4.8.3. Accessing images for Operators from private registries If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation. Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access. The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access: Index or catalog images A CatalogSource object can reference an index image or a catalog image, which are catalog sources packaged as container images hosted in images registries. Index images use the Bundle Format and reference bundle images, while catalog images use the Package Manifest Format. If an index or catalog image is hosted in a private registry, a secret can be used to enable pull access. Bundle images Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access. 
Operator and Operand images If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed. Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces. Prerequisites At least one of the following hosted in a private registry: An index image or catalog image. An Operator bundle image. An Operator or Operand image. Procedure Create a secret for each required private registry. Log in to the private registry to create or update your registry credentials file: USD podman login <registry>:<port> Note The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is USD{XDG_RUNTIME_DIR}/containers/auth.json . For the docker CLI, the default location is /root/.docker/config.json . It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull. A registry credentials file can, by default, store details for more than one registry. Verify the current contents of your file. For example: File storing credentials for two registries { "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" }, "quay.io": { "auth": "Xd2lhdsbnRib21iMQ==" } } } Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods: Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains. Edit your registry credentials file and separate the registry details to be stored in multiple files. For example: File storing credentials for one registry { "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" } } } File storing credentials for another registry { "auths": { "quay.io": { "auth": "Xd2lhdsbnRib21iMQ==" } } } Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry: USD oc create secret generic <secret_name> \ -n openshift-marketplace \ --from-file=.dockerconfigjson=<path/to/registry/credentials> \ --type=kubernetes.io/dockerconfigjson Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path. 
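As an optional sanity check, you can decode a secret you just created to confirm that it stores credentials for only the intended registry. The following is an illustrative sketch rather than part of the official procedure; it assumes the base64 and jq tools are available on your workstation:

USD oc get secret <secret_name> -n openshift-marketplace -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq '.auths | keys'

The output should list a single registry host name, for example ["registry.redhat.io"]. If more than one registry appears, separate the credentials into multiple files and recreate the secret as described earlier in this step.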
Create or update an existing CatalogSource object to reference one or more secrets: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - "<secret_name_1>" - "<secret_name_2>" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m 1 Add a spec.secrets section and specify any required secrets. If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or individual target tenant namespaces. To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace. Warning Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster. Extract the .dockerconfigjson file from the global pull secret: USD oc extract secret/pull-secret -n openshift-config --confirm Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file: USD cat .dockerconfigjson | \ jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \ 1 > new_dockerconfigjson 1 Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials. Update the global pull secret with the new file: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=new_dockerconfigjson To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace. Recreate the secret that you created for the openshift-marketplace in the tenant namespace: USD oc create secret generic <secret_name> \ -n <tenant_namespace> \ --from-file=.dockerconfigjson=<path/to/registry/credentials> \ --type=kubernetes.io/dockerconfigjson Verify the name of the service account for the Operator by searching the tenant namespace: USD oc get sa -n <tenant_namespace> 1 1 If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace. Example output NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1 1 Service account for an installed etcd Operator. Link the secret to the service account for the Operator: USD oc secrets link <operator_sa> \ -n <tenant_namespace> \ <secret_name> \ --for=pull Additional resources See What is a secret? for more information on the types of secrets, including those used for registry credentials. See Updating the global cluster pull secret for more details on the impact of changing this secret. See Allowing pods to reference images from other secured registries for more details on linking pull secrets to service accounts per namespace. 4.8.4. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs. 
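Before disabling the default catalogs, you might want to record which catalog sources are currently present and how the OperatorHub object is configured, so that you can compare the state after the change. A quick, illustrative check (the OperatorHub resource is cluster scoped and named cluster):

USD oc get catalogsources -n openshift-marketplace
USD oc get operatorhub cluster -o yaml

After you run the patch command in the following procedure, spec.disableAllDefaultSources should read true and the default sources should no longer be listed in the openshift-marketplace namespace.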
Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 4.8.5. Removing custom catalogs As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source. Procedure In the Administrator perspective of the web console, navigate to Administration Cluster Settings . Click the Global Configuration tab, and then click OperatorHub . Click the Sources tab. Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource . 4.9. Using Operator Lifecycle Manager on restricted networks For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters , Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full Internet connectivity. However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full Internet access. The workstation, which requires full Internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry. The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped , host, which requires removable media to physically move the mirrored content to the disconnected environment. This guide describes the following process that is required to enable OLM in restricted networks: Disable the default remote OperatorHub sources for OLM. Use a workstation with full Internet access to create and push local mirrors of the OperatorHub content to a mirror registry. Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources. After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released. Important While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself. The Operator must: List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object. Reference all specified images by a digest (SHA) and not by a tag. See the following Red Hat Knowledgebase Article for a list of Red Hat Operators that support running in disconnected mode: https://access.redhat.com/articles/4740011 Additional resources Red Hat-provided Operator catalogs Enabling your Operator for restricted network environments 4.9.1. Prerequisites Log in to your OpenShift Container Platform cluster as a user with cluster-admin privileges. 
If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI . Note If you are using OLM in a restricted network on IBM Z, you must have at least 12 GB allocated to the directory where you place your registry. 4.9.2. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 4.9.3. Pruning an index image An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want. When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs. For the steps in this procedure, the target registry is an existing mirror registry that is accessible by your workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for any index image. Prerequisites Workstation with unrestricted network access podman version 1.9.3+ grpcurl (third-party command-line tool) opm version 1.18.0+ Access to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Authenticate with registry.redhat.io : USD podman login registry.redhat.io Authenticate with your target registry: USD podman login <target_registry> Determine the list of packages you want to include in your pruned index. Run the source index image that you want to prune in a container. For example: USD podman run -p50051:50051 \ -it registry.redhat.io/redhat/redhat-operator-index:v4.7 Example output Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.7... Getting image source signatures Copying blob ae8a0c23f5b1 done ... INFO[0000] serving registry database=/database/index.db port=50051 In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index: USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example: Example snippets of packages list ... { "name": "advanced-cluster-management" } ... { "name": "jaeger-product" } ... { { "name": "quay-operator" } ... 
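Because packages.out is a stream of small JSON objects, you can optionally flatten it into a plain, sorted list of package names to make the selection easier. This is only a convenience sketch and assumes the jq tool is installed on your workstation:

USD jq -r .name packages.out | sort

Each resulting line is a package name that can be added to the comma-separated list passed to the -p flag of opm index prune in a later step.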
In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process. Run the following command to prune the source index of all but the specified packages: USD opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.7 \ 1 -p advanced-cluster-management,jaeger-product,quay-operator \ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.7] \ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 4 1 Index to prune. 2 Comma-separated list of packages to keep. 3 Required only for IBM Power Systems and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version. 4 Custom tag for new index image being built. Run the following command to push the new index image to your target registry: USD podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to. 4.9.4. Mirroring an Operator catalog You can mirror the Operator content of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2 . For a cluster on a restricted network, this registry can be one that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation. Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. The oc adm catalog mirror command also automatically mirrors the index image that is specified during the mirroring process, whether it be a Red Hat-provided index image or your own custom-built index image, to the target registry. You can then use the mirrored index image to create a catalog source that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster. Prerequisites Workstation with unrestricted network access. podman version 1.9.3 or later. Access to mirror registry that supports Docker v2-2 . Decide which namespace on your mirror registry you will use to store the mirrored Operator content. For example, you might create an olm-mirror namespace. If your mirror registry does not have Internet access, connect removable media to your workstation with unrestricted network access. If you are working with private registries, including registry.redhat.io , set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json Procedure If you want to mirror a Red Hat-provided catalog, run the following command on your workstation with unrestricted network access to authenticate with registry.redhat.io : USD podman login registry.redhat.io The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. The default behavior of the command generates manifests, then automatically mirrors all of the image content from the index image, as well as the index image itself, to your mirror registry. 
Alternatively, if your mirror registry is on a completely disconnected, or airgapped , host, you can first mirror the content to removable media, move the media to the disconnected environment, then mirror the content from the media to the registry. Option A: If your mirror registry is on the same network as your workstation with unrestricted network access, take the following actions on your workstation: If your mirror registry requires authentication, run the following command to log in to the registry: USD podman login <mirror_registry> Run the following command to mirror the content: USD oc adm catalog mirror \ <index_image> \ 1 <mirror_registry>:<port>/<namespace> \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] \ 5 [--manifests-only] 6 1 Specify the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.7 . 2 Specify the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to. 3 Optional: If required, specify the location of your registry credentials file. {REG_CREDS} is required for registry.redhat.io . 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , and linux/s390x . 6 Optional: Generate only the manifests required for mirroring, and do not actually mirror the image content to a registry. This option can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you require only a subset of packages. You can then use the mapping.txt file with the oc image mirror command to mirror the modified list of images in a later step. This flag is intended for only advanced selective mirroring of content from the catalog; the opm index prune command, if you used it previously to prune the index image, is suitable for most catalog management use cases. Example output src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 ... wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2 1 Directory for the temporary index.db database generated by the command. 2 Be sure to record the manifests directory name that is generated. This directory name is used in a later step. Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution article. Option B: If your mirror registry is on a disconnected host, take the following actions. 
Run the following command on your workstation with unrestricted network access to mirror the content to local files: USD oc adm catalog mirror \ <index_image> \ 1 file:///local/index \ 2 [-a USD{REG_CREDS}] \ [--insecure] 1 Specify the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.7 . 2 Mirrors content to local files in your current directory. Example output ... info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2 1 Be sure to record the manifests directory name that is generated. This directory name is used in a later step. 2 Record the expanded file:// path that is based on your provided index image. This path is used in a later step. Copy the v2/ directory that is generated in your current directory to removable media. Physically remove the media and attach it to a host in the disconnected environment that has access to the mirror registry. If your mirror registry requires authentication, run the following command on your host in the disconnected environment to log in to the registry: USD podman login <mirror_registry> Run the following command from the parent directory containing the v2/ directory to upload the images from local files to the mirror registry: USD oc adm catalog mirror \ file://local/index/<repo>/<index_image>:<tag> \ 1 <mirror_registry>:<port>/<namespace> \ 2 [-a USD{REG_CREDS}] \ [--insecure] 1 Specify the file:// path from the previous command output. 2 Specify the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to. Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution article. Run the oc adm catalog mirror command again. Use the newly mirrored index image as the source and the same mirror registry namespace used in the previous step as the target: USD oc adm catalog mirror \ <mirror_registry>:<port>/<index_image> \ <mirror_registry>:<port>/<namespace> \ --manifests-only \ 1 [-a USD{REG_CREDS}] \ [--insecure] 1 The --manifests-only flag is required for this step so that the command does not copy all of the mirrored content again. Important This step is required because the image mappings in the imageContentSourcePolicy.yaml file generated during the previous step must be updated from local paths to valid mirror locations. Failure to do so will cause errors when you create the ImageContentSourcePolicy object in a later step. After mirroring the content to your registry, inspect the manifests directory that is generated in your current directory. Note The manifests directory name is used in a later step.
If you mirrored content to a registry on the same network in the previous step, the directory name takes the following form: manifests-<index_image_name>-<random_number> If you mirrored content to a registry on a disconnected host in the previous step, the directory name takes the following form: manifests-index/<namespace>/<index_image_name>-<random_number> The manifests directory contains the following files, some of which might require further modification: The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your index image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster. Important If you mirrored the content to local files, you must modify your catalogSource.yaml file to remove any slash ( / ) characters from the metadata.name field. Otherwise, when you attempt to create the object, it fails with an "invalid resource name" error. The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration. Important If you used the --manifests-only flag during the mirroring process and want to further trim the subset of packages to be mirrored, see the steps in the "Mirroring a Package Manifest Format catalog image" procedure about modifying your mapping.txt file and using the file with the oc image mirror command. After following those further actions, you can continue this procedure. On a host with access to the disconnected cluster, create the ImageContentSourcePolicy (ICSP) object by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory: USD oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content. Note Applying the ICSP causes all worker nodes in the cluster to restart. You must wait for this reboot process to finish cycling through each of your worker nodes before proceeding.
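One way to follow that rollout, offered here as a rough sketch that assumes the default master and worker machine config pools, is to watch the pools until the UPDATED column returns to True for every pool:

USD oc get machineconfigpool -w

You can also spot-check individual nodes with oc get nodes to confirm that they have returned to the Ready state.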
You can now create a CatalogSource object to reference your mirrored index image and Operator content. Additional resources Mirroring images for a disconnected installation Architecture and operating system support for Operators Mirroring a Package Manifest Format catalog image 4.9.5. Creating a catalog from an index image You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM). Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>:<port>/<namespace>/redhat-operator-index:v4.7 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any slash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify your index image. 4 Specify your name or an organization name publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources If your index image is hosted on a private registry and requires authentication, see Accessing images for Operators from private registries . 4.9.6. Updating an index image After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image. You can update an existing index image using the opm index add command. For restricted networks, the updated content must also be mirrored again to the cluster. Prerequisites opm version 1.12.3+ podman version 1.9.3+ An index image built and pushed to a registry. An existing catalog source referencing the index image. Procedure Update the existing index by adding bundle images: USD opm index add \ --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3 --pull-tool podman 4 1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index. 2 The --from-index flag specifies the previously pushed index. 3 The --tag flag specifies the image tag to apply to the updated index image. 4 The --pull-tool flag specifies the tool used to pull container images. where: <registry> Specifies the hostname of the registry, such as quay.io or mirror.example.com . <namespace> Specifies the namespace of the registry, such as ocs-dev or abc . <new_bundle_image> Specifies the new bundle image to add to the registry, such as ocs-operator .
<digest> Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 . <existing_index_image> Specifies the previously pushed image, such as abc-redhat-operator-index . <existing_tag> Specifies a previously pushed image tag, such as 4.7 . <updated_tag> Specifies the image tag to apply to the updated index image, such as 4.7.1 . Example command USD opm index add \ --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \ --from-index mirror.example.com/abc/abc-redhat-operator-index:4.7 \ --tag mirror.example.com/abc/abc-redhat-operator-index:4.7.1 \ --pull-tool podman Push the updated index image: USD podman push <registry>/<namespace>/<existing_index_image>:<updated_tag> Follow the steps in the Mirroring an Operator catalog procedure again to mirror the updated content. However, when you get to the step about creating the ImageContentSourcePolicy (ICSP) object, use the oc replace command instead of the oc create command. For example: USD oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml This change is required because the object already exists and must be updated. Note Normally, the oc apply command can be used to update existing objects that were previously created using oc apply . However, due to a known issue regarding the size of the metadata.annotations field in ICSP objects, the oc replace command must be used for this step currently. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added: USD oc get packagemanifests -n openshift-marketplace Additional resources Mirroring an Operator catalog | [
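As a closing tip for the update procedure above: once the poll interval has elapsed, you can narrow the package listing to the catalog you updated rather than scanning the full output. This is an illustrative example that assumes your catalog source uses the display name My Operator Catalog:

USD oc get packagemanifests -n openshift-marketplace | grep "My Operator Catalog"

Any packages provided by the newly added bundles should appear in this filtered list.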
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml",
"oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV",
"currentCSV: jaeger-operator.v1.8.2",
"oc delete subscription jaeger -n openshift-operators",
"subscription.operators.coreos.com \"jaeger\" deleted",
"oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators",
"clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" status: conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/test-catalog:latest",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: <registry>:<port>/<namespace>/redhat-operator-index:v4.7 2 displayName: My Operator Catalog publisher: <publisher_name> 3 updateStrategy: registryPoll: 4 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.7 --tag mirror.example.com/abc/abc-redhat-operator-index:4.7.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.7",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.7 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.7 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.7] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"AUTH_TOKEN=USD(curl -sH \"Content-Type: application/json\" -XPOST https://quay.io/cnr/api/v1/users/login -d ' { \"user\": { \"username\": \"'\"<quay_username>\"'\", \"password\": \"'\"<quay_password>\"'\" } }' | jq -r '.token')",
"podman login <registry_host_name>",
"podman login registry.redhat.io",
"oc adm catalog build --appregistry-org redhat-operators \\ 1 --from=registry.redhat.io/openshift4/ose-operator-registry:v4.7 \\ 2 --filter-by-os=\"linux/amd64\" \\ 3 --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \\ 4 [-a USD{REG_CREDS}] \\ 5 [--insecure] \\ 6 [--auth-token \"USD{AUTH_TOKEN}\"] 7",
"INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1",
"INFO[0014] directory dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package W1114 19:42:37.876180 34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found Uploading ... 244.9kB/s",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"oc adm catalog mirror <registry_host_name>:<port>/olm/redhat-operators:v1 \\ 1 <registry_host_name>:<port> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"using database path mapping: /:/tmp/190214037 wrote database to /tmp/190214037 using database at: /tmp/190214037/bundles.db 1",
"echo \"select * from related_image where operatorbundle_name like 'clusterlogging.4.3%';\" | sqlite3 -line /tmp/190214037/bundles.db 1",
"image = registry.redhat.io/openshift-logging/kibana6-rhel8@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61 operatorbundle_name = clusterlogging.4.3.33-202008111029.p0 image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506 operatorbundle_name = clusterlogging.4.3.33-202008111029.p0",
"registry.redhat.io/openshift-logging/kibana6-rhel8@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/kibana6-rhel8:a767c8f0 registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b",
"oc image mirror [-a USD{REG_CREDS}] --filter-by-os='.*' -f ./manifests-redhat-operators-<random_number>/mapping.txt",
"oc create -f ./manifests-redhat-operators-<random_number>/imageContentSourcePolicy.yaml",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"AUTH_TOKEN=USD(curl -sH \"Content-Type: application/json\" -XPOST https://quay.io/cnr/api/v1/users/login -d ' { \"user\": { \"username\": \"'\"<quay_username>\"'\", \"password\": \"'\"<quay_password>\"'\" } }' | jq -r '.token')",
"podman login <registry_host_name>",
"podman login registry.redhat.io",
"oc adm catalog build --appregistry-org redhat-operators \\ 1 --from=registry.redhat.io/openshift4/ose-operator-registry:v4.7 \\ 2 --filter-by-os=\"linux/amd64\" \\ 3 --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \\ 4 [-a USD{REG_CREDS}] \\ 5 [--insecure] \\ 6 [--auth-token \"USD{AUTH_TOKEN}\"] 7",
"INFO[0013] loading Bundles dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2",
"oc adm catalog mirror <registry_host_name>:<port>/olm/redhat-operators:v2 \\ 1 <registry_host_name>:<port> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] 5",
"oc replace -f ./manifests-redhat-operators-<random_number>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc image: <registry_host_name>:<port>/olm/redhat-operators:v2 1 displayName: My Operator Catalog publisher: grpc",
"oc replace -f catalogsource.yaml",
"oc edit catalogsource <catalog_source_name> -n openshift-marketplace",
"podman pull <registry_host_name>:<port>/olm/redhat-operators:v1",
"podman run -p 50051:50051 -it <registry_host_name>:<port>/olm/redhat-operators:v1",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages",
"{ \"name\": \"3scale-operator\" } { \"name\": \"amq-broker\" } { \"name\": \"amq-online\" }",
"grpcurl -plaintext -d '{\"pkgName\":\"kiali-ossm\",\"channelName\":\"stable\"}' localhost:50051 api.Registry/GetBundleForChannel",
"{ \"csvName\": \"kiali-operator.v1.0.7\", \"packageName\": \"kiali-ossm\", \"channelName\": \"stable\",",
"podman inspect --format='{{index .RepoDigests 0}}' <registry_host_name>:<port>/olm/redhat-operators:v1",
"example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: custom-redhat-operators namespace: my-ns spec: sourceType: grpc image: example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 displayName: Red Hat Operators",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: servicemeshoperator namespace: my-ns spec: source: custom-redhat-operators sourceNamespace: my-ns name: servicemeshoperator channel: \"1.0\"",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"podman login registry.redhat.io",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.7",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.7 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.7 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.7] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.7",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"podman login registry.redhat.io",
"podman login <mirror_registry>",
"oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2",
"oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 [-a USD{REG_CREDS}] [--insecure]",
"info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2",
"podman login <mirror_registry>",
"oc adm catalog mirror file://local/index/<repo>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 [-a USD{REG_CREDS}] [--insecure]",
"oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>/<namespace> --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]",
"manifests-<index_image_name>-<random_number>",
"manifests-index/<namespace>/<index_image_name>-<random_number>",
"oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>:<port>/<namespace>/redhat-operator-index:v4.7 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.7 --tag mirror.example.com/abc/abc-redhat-operator-index:4.7.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml",
"oc get packagemanifests -n openshift-marketplace"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/operators/administrator-tasks |
22.2. The Text Mode Installation Program User Interface | Both the loader and later anaconda use a screen-based interface that includes most of the on-screen widgets commonly found on graphical user interfaces. Figure 22.1, "Installation Program Widgets as seen in URL Setup " , and Figure 22.2, "Installation Program Widgets as seen in Choose a Language " , illustrate widgets that appear on screens during the installation process. Figure 22.1. Installation Program Widgets as seen in URL Setup Figure 22.2. Installation Program Widgets as seen in Choose a Language Here is a list of the most important widgets shown in Figure 22.1, "Installation Program Widgets as seen in URL Setup " and Figure 22.2, "Installation Program Widgets as seen in Choose a Language " : Window - Windows (usually referred to as dialogs in this manual) appear on your screen throughout the installation process. At times, one window may overlay another; in these cases, you can only interact with the window on top. When you are finished in that window, it disappears, allowing you to continue working in the window underneath. Checkbox - Checkboxes allow you to select or deselect a feature. The box displays either an asterisk (selected) or a space (unselected). When the cursor is within a checkbox, press Space to select or deselect a feature. Text Input - Text input lines are regions where you can enter information required by the installation program. When the cursor rests on a text input line, you may enter and/or edit information on that line. Text Widget - Text widgets are regions of the screen for the display of text. At times, text widgets may also contain other widgets, such as checkboxes. If a text widget contains more information than can be displayed in the space reserved for it, a scroll bar appears; if you position the cursor within the text widget, you can then use the Up and Down arrow keys to scroll through all the information available. Your current position is shown on the scroll bar by a # character, which moves up and down the scroll bar as you scroll. Scroll Bar - Scroll bars appear on the side or bottom of a window to control which part of a list or document is currently in the window's frame. The scroll bar makes it easy to move to any part of a file. Button Widget - Button widgets are the primary method of interacting with the installation program. You progress through the windows of the installation program by navigating these buttons, using the Tab and Enter keys. Buttons can be selected when they are highlighted. Cursor - Although not a widget, the cursor is used to select (and interact with) a particular widget. As the cursor is moved from widget to widget, it may cause the widget to change color, or the cursor itself may only appear positioned in or next to the widget. In Figure 22.1, "Installation Program Widgets as seen in URL Setup " , the cursor is positioned on the Enable HTTP proxy checkbox. Figure 22.2, "Installation Program Widgets as seen in Choose a Language " , shows the cursor on the OK button. 22.2.1. Using the Keyboard to Navigate Navigation through the installation dialogs is performed through a simple set of keystrokes. To move the cursor, use the Left , Right , Up , and Down arrow keys. Use Tab and Shift - Tab to cycle forward or backward through each widget on the screen. Along the bottom, most screens display a summary of available cursor positioning keys.
To "press" a button, position the cursor over the button (using Tab , for example) and press Space or Enter . To select an item from a list of items, move the cursor to the item you wish to select and press Enter . To select an item with a checkbox, move the cursor to the checkbox and press Space to select an item. To deselect, press Space a second time. Pressing F12 accepts the current values and proceeds to the next dialog; it is equivalent to pressing the OK button. Warning Unless a dialog box is waiting for your input, do not press any keys during the installation process (doing so may result in unpredictable behavior). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-guimode-textinterface-s390 |
Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] | Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CustomResourceDefinitionSpec describes how a user wants their resource to appear status object CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition 3.1.1. .spec Description CustomResourceDefinitionSpec describes how a user wants their resource to appear Type object Required group names scope versions Property Type Description conversion object CustomResourceConversion describes how to convert different versions of a CR. group string group is the API group of the defined custom resource. The custom resources are served under /apis/<group>/... . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). names object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition preserveUnknownFields boolean preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting x-preserve-unknown-fields to true in spec.versions[*].schema.openAPIV3Schema . See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning for details. scope string scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are Cluster and Namespaced . versions array versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. versions[] object CustomResourceDefinitionVersion describes a version for CRD. 3.1.2. .spec.conversion Description CustomResourceConversion describes how to convert different versions of a CR. 
Type object Required strategy Property Type Description strategy string strategy specifies how custom resources are converted between versions. Allowed values are: - "None" : The converter only change the apiVersion and would not touch any other field in the custom resource. - "Webhook" : API Server will call to an external webhook to do the conversion. Additional information is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set. webhook object WebhookConversion describes how to call a conversion webhook 3.1.3. .spec.conversion.webhook Description WebhookConversion describes how to call a conversion webhook Type object Required conversionReviewVersions Property Type Description clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook. conversionReviewVersions array (string) conversionReviewVersions is an ordered list of preferred ConversionReview versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail. 3.1.4. .spec.conversion.webhook.clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook. Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 3.1.5. .spec.conversion.webhook.clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path at which the webhook will be contacted. port integer port is an optional service port at which the webhook will be contacted. port should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility. 3.1.6. 
.spec.names Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.7. .spec.versions Description versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. Type array 3.1.8. .spec.versions[] Description CustomResourceDefinitionVersion describes a version for CRD. Type object Required name served storage Property Type Description additionalPrinterColumns array additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. additionalPrinterColumns[] object CustomResourceColumnDefinition specifies a column for server side printing. deprecated boolean deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false. deprecationWarning string deprecationWarning overrides the default warning returned to API clients. May only be set when deprecated is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists. name string name is the version name, e.g. "v1", "v2beta1", etc. The custom resources are served under this version at /apis/<group>/<version>/... if served is true. schema object CustomResourceValidation is a list of validation methods for CustomResources. selectableFields array selectableFields specifies paths to fields that may be used as field selectors. 
A maximum of 8 selectable fields are allowed. See https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors selectableFields[] object SelectableField specifies the JSON path of a field that may be used with field selectors. served boolean served is a flag enabling/disabling this version from being served via REST APIs storage boolean storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true. subresources object CustomResourceSubresources defines the status and scale subresources for CustomResources. 3.1.9. .spec.versions[].additionalPrinterColumns Description additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. Type array 3.1.10. .spec.versions[].additionalPrinterColumns[] Description CustomResourceColumnDefinition specifies a column for server side printing. Type object Required name type jsonPath Property Type Description description string description is a human readable description of this column. format string format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist clients in identifying that the column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. jsonPath string jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column. name string name is a human readable name for the column. priority integer priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0. type string type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. 3.1.11. .spec.versions[].schema Description CustomResourceValidation is a list of validation methods for CustomResources. Type object Property Type Description openAPIV3Schema `` openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning. 3.1.12. .spec.versions[].selectableFields Description selectableFields specifies paths to fields that may be used as field selectors. A maximum of 8 selectable fields are allowed. See https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors Type array 3.1.13. .spec.versions[].selectableFields[] Description SelectableField specifies the JSON path of a field that may be used with field selectors. Type object Required jsonPath Property Type Description jsonPath string jsonPath is a simple JSON path which is evaluated against each custom resource to produce a field selector value. Only JSON paths without the array notation are allowed. Must point to a field of type string, boolean or integer. Types with enum values and strings with formats are allowed. If jsonPath refers to an absent field in a resource, the jsonPath evaluates to an empty string. Must not point to metadata fields. Required. 3.1.14. .spec.versions[].subresources Description CustomResourceSubresources defines the status and scale subresources for CustomResources.
Type object Property Type Description scale object CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. status object CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza 3.1.15. .spec.versions[].subresources.scale Description CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. Type object Required specReplicasPath statusReplicasPath Property Type Description labelSelectorPath string labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale status.selector . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status or .spec . Must be set to work with HorizontalPodAutoscaler. The field pointed by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the status.selector value in the /scale subresource will default to the empty string. specReplicasPath string specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale spec.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .spec . If there is no value under the given path in the custom resource, the /scale subresource will return an error on GET. statusReplicasPath string statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale status.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status . If there is no value under the given path in the custom resource, the status.replicas value in the /scale subresource will default to 0. 3.1.16. .spec.versions[].subresources.status Description CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza Type object 3.1.17. .status Description CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition Type object Property Type Description acceptedNames object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition conditions array conditions indicate state for particular aspects of a CustomResourceDefinition conditions[] object CustomResourceDefinitionCondition contains details for the current condition of this pod. storedVersions array (string) storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. 
The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list. 3.1.18. .status.acceptedNames Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.19. .status.conditions Description conditions indicate state for particular aspects of a CustomResourceDefinition Type array 3.1.20. .status.conditions[] Description CustomResourceDefinitionCondition contains details for the current condition of this pod. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. type string type is the type of the condition. Types include Established, NamesAccepted and Terminating. 3.2. API endpoints The following API endpoints are available: /apis/apiextensions.k8s.io/v1/customresourcedefinitions DELETE : delete collection of CustomResourceDefinition GET : list or watch objects of kind CustomResourceDefinition POST : create a CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions GET : watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} DELETE : delete a CustomResourceDefinition GET : read the specified CustomResourceDefinition PATCH : partially update the specified CustomResourceDefinition PUT : replace the specified CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} GET : watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 
/apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status GET : read status of the specified CustomResourceDefinition PATCH : partially update status of the specified CustomResourceDefinition PUT : replace status of the specified CustomResourceDefinition 3.2.1. /apis/apiextensions.k8s.io/v1/customresourcedefinitions HTTP method DELETE Description delete collection of CustomResourceDefinition Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CustomResourceDefinition Table 3.3. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a CustomResourceDefinition Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 202 - Accepted CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.2. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions HTTP method GET Description watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition HTTP method DELETE Description delete a CustomResourceDefinition Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CustomResourceDefinition Table 3.11. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CustomResourceDefinition Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CustomResourceDefinition Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.4. 
/apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition HTTP method GET Description watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status Table 3.19. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition HTTP method GET Description read status of the specified CustomResourceDefinition Table 3.20. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CustomResourceDefinition Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CustomResourceDefinition Table 3.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.24. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.25. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extension_apis/customresourcedefinition-apiextensions-k8s-io-v1 |
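To make the preceding reference concrete, the following is a minimal sketch of a manifest that exercises the main spec fields described above: group, names, scope, and a single version that is both served and used for storage, with an OpenAPI v3 schema. The CronTab resource, the stable.example.com group, and the cronSpec and replicas fields are hypothetical placeholders added for illustration only; treat this as an example, not a required layout.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # must be <names.plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
  versions:
  - name: v1
    served: true
    storage: true                      # exactly one version must set storage=true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer

After this object is created, for example with oc create -f, the custom resource is served under /apis/stable.example.com/v1/namespaces/<namespace>/crontabs, which follows directly from the group, version name, scope, and plural name set in the manifest.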
Chapter 9. Enforcing Puppet configuration on hosts | Chapter 9. Enforcing Puppet configuration on hosts You can enforce configuration from Satellite either manually on demand (run once) or automatically in configurable intervals. 9.1. Running Puppet once using SSH Assign the proper job template to the Run Puppet Once feature to run Puppet on hosts. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Select the puppet_run_host remote execution feature. Assign the Run Puppet Once - SSH Default job template. Run Puppet on hosts by running a job and selecting category Puppet and template Run Puppet Once - SSH Default . Alternatively, click Run Puppet Once in the Schedule Remote Job drop down menu on the host details page. 9.2. Understanding intervals of automatic enforcement Satellite considers hosts to be out of sync if the last Puppet report is older than the combined values of outofsync_interval and puppet_interval set in minutes. By default, the Puppet agent on your hosts runs every 30 minutes, the puppet_interval is set to 35 minutes and the global outofsync_interval is set to 30 minutes. The effective time after which hosts are considered out of sync is the sum of outofsync_interval and puppet_interval . For example, setting the global outofsync_interval to 30 and the puppet_interval to 60 results in a total of 90 minutes after which the host status changes to out of sync . 9.3. Setting the Puppet agent run interval on a host Set the interval when the Puppet agent runs and sends reports to Satellite. Procedure Connect to your host using SSH. Add the Puppet agent run interval to /etc/puppetlabs/puppet/puppet.conf , for example runinterval = 1h . 9.4. Setting the global out-of-sync interval Procedure In the Satellite web UI, navigate to Administer > Settings . On the General tab, edit Out of sync interval . Set a duration, in minutes, after which hosts are considered to be out of sync. You can also override this interval on host groups or individual hosts by adding the outofsync_interval parameter. 9.5. Setting the Puppet out-of-sync interval Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Config Management tab. In the Puppet interval field, set the value to the duration, in minutes, after which hosts reporting using Puppet are considered to be out of sync. 9.6. Overriding out-of-sync interval for a host group Procedure In the Satellite web UI, navigate to Configure > Host Groups . Select a host group. On the Parameters tab, click Add Parameter . In the Name field, enter outofsync_interval . From the Type dropdown menu, select integer . In the Value field, enter the new interval in minutes. Click Submit . 9.7. Overriding out-of-sync interval for an individual host Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit for a selected host. On the Parameters tab, click Add Parameter . In the Name field, enter outofsync_interval . From the Type dropdown menu, select integer . In the Value field, enter the new interval in minutes. Click Submit . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_puppet_integration/enforcing-puppet-configuration-on-hosts_managing-configurations-puppet |
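To illustrate the run interval setting described above, the following is a minimal sketch of the agent-side change. The file path and the 1h value come from the procedure; placing the setting in the [agent] section is an assumption based on common Puppet practice, since the procedure names only the file.

# /etc/puppetlabs/puppet/puppet.conf
[agent]
# Run the Puppet agent, and therefore send a report to Satellite, once per hour
runinterval = 1h

Applying the interval rule above, a host configured this way is flagged as out of sync once its last report is older than the sum of puppet_interval and outofsync_interval. With the defaults of 35 and 30 minutes, that threshold is 65 minutes, which still covers an hourly agent run, but only with a 5 minute margin; if you lengthen runinterval further, raise one of the two Satellite settings so that their sum stays comfortably above the agent run interval.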
Chapter 29. Triggering scripts for cluster events | Chapter 29. Triggering scripts for cluster events A Pacemaker cluster is an event-driven system, where an event might be a resource or node failure, a configuration change, or a resource starting or stopping. You can configure Pacemaker cluster alerts to take some external action when a cluster event occurs by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything with this information, such as send an email message or log to a file or update a monitoring system. Pacemaker provides several sample alert agents, which are installed in /usr/share/pacemaker/alerts by default. These sample scripts may be copied and used as is, or they may be used as templates to be edited to suit your purposes. Refer to the source code of the sample agents for the full set of attributes they support. If the sample alert agents do not meet your needs, you can write your own alert agents for a Pacemaker alert to call. 29.1. Installing and configuring sample alert agents When you use one of the sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments. Note that while Red Hat supports the interfaces that the alert agents scripts use to communicate with Pacemaker, Red Hat does not provide support for the custom agents themselves. To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_file.sh.sample script as alert_file.sh . After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_file.sh alert agent to log events to a file. Alert agents run as the user hacluster , which has a minimal set of permissions. This example creates the log file pcmk_alert_file.log that will be used to record the events. It then creates the alert agent and adds the path to the log file as its recipient. The following example installs the alert_snmp.sh.sample script as alert_snmp.sh and configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. After configuring the alert, this example configures a recipient for the alert and displays the alert configuration. The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration. 29.2. Creating a cluster alert The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id , one will be generated. Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. 
They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes. The following example creates a simple alert that will call myscript.sh for each event. 29.3. Displaying, modifying, and removing cluster alerts There are a variety of pcs commands you can use to display, modify, and remove cluster alerts. The following command shows all configured alerts along with the values of the configured options. The following command updates an existing alert with the specified alert-id value. The following command removes an alert with the specified alert-id value. Alternately, you can run the pcs alert delete command, which is identical to the pcs alert remove command. Both the pcs alert delete and the pcs alert remove commands allow you to specify more than one alert to be deleted. 29.4. Configuring cluster alert recipients Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports. The following command adds a new recipient to the specified alert. The following command updates an existing alert recipient. The following command removes the specified alert recipient. Alternately, you can run the pcs alert recipient delete command, which is identical to the pcs alert recipient remove command. Both the pcs alert recipient remove and the pcs alert recipient delete commands allow you to remove more than one alert recipient. The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert . This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable. 29.5. Alert meta options As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. The following table describes the alert meta options. Meta options can be configured per alert agent as well as per recipient. Table 29.1. Alert Meta Options Meta-Attribute Default Description enabled true (RHEL 9.3 and later) If set to false for an alert, the alert will not be used. If set to true for an alert and false for a particular recipient of that alert, that recipient will not be used. timestamp-format %H:%M:%S.%06N Format the cluster will use when sending the event's timestamp to the agent. This is a string as used with the date (1) command. timeout 30s If the alert agent does not complete within this amount of time, it will be terminated. The following example configures an alert that calls the script myscript.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2 . The script will get called twice for each event, with each call using a 15-second timeout. One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c. 29.6. Cluster alert configuration command examples The following sequential examples show some basic alert configuration commands to show the format to use to create alerts, add recipients, and display the configured alerts. 
Note that while you must install the alert agents themselves on each node in a cluster, you need to run the pcs commands only once. The following commands create a simple alert, add two recipients to the alert, and display the configured values. Since no alert ID value is specified, the system creates an alert ID value of alert . The first recipient creation command specifies a recipient of rec_value . Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID. The second recipient creation command specifies a recipient of rec_value2 . This command specifies a recipient ID of my-recipient for the recipient. The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient . Since no recipient ID is specified, the system provides a recipient id of my-alert-recipient . The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient . The following command removes the recipient my-alert-recipient from alert . The following command removes my-alert from the configuration. 29.7. Writing a cluster alert agent There are three types of Pacemaker cluster alerts: node alerts, fencing alerts, and resource alerts. The environment variables that are passed to the alert agents can differ, depending on the type of alert. The following table describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type. Table 29.2. Environment Variables Passed to Alert Agents Environment Variable Description CRM_alert_kind The type of alert (node, fencing, or resource) CRM_alert_version The version of Pacemaker sending the alert CRM_alert_recipient The configured recipient CRM_alert_node_sequence A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts have been issued by Pacemaker. An alert for an event that happened later in time reliably has a higher sequence number than alerts for earlier events. Be aware that this number has no cluster-wide meaning. CRM_alert_timestamp A timestamp created prior to executing the agent, in the format specified by the timestamp-format meta option. This allows the agent to have a reliable, high-precision time of when the event occurred, regardless of when the agent itself was invoked (which could potentially be delayed due to system load or other circumstances). CRM_alert_node Name of affected node CRM_alert_desc Detail about event. For node alerts, this is the node's current state (member or lost). For fencing alerts, this is a summary of the requested fencing operation, including origin, target, and fencing operation error code, if any. For resource alerts, this is a readable string equivalent of CRM_alert_status .
CRM_alert_nodeid ID of node whose status changed (provided with node alerts only) CRM_alert_task The requested fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rc The numerical return code of the fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rsc The name of the affected resource (resource alerts only) CRM_alert_interval The interval of the resource operation (resource alerts only) CRM_alert_target_rc The expected numerical return code of the operation (resource alerts only) CRM_alert_status A numerical code used by Pacemaker to represent the operation result (resource alerts only) When writing an alert agent, you must take the following concerns into account. Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. Users may modify the configuration in stages, and add a recipient later. If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queueing resource-intensive actions into some other instance, instead of directly executing them. Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges. Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format ), CRM_alert_recipient , and all alert options. This is necessary to protect against configuration errors. In addition, if some user can modify the CIB without having hacluster -level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection. If a cluster contains resources with operations for which the on-fail parameter is set to fence , there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the pacemaker-fenced and pacemaker-controld will send notifications. Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent. Note The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_ . One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. | [
"install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh",
"touch /var/log/pcmk_alert_file.log chown hacluster:haclient /var/log/pcmk_alert_file.log chmod 600 /var/log/pcmk_alert_file.log pcs alert create id=alert_file description=\"Log events to a file.\" path=/var/lib/pacemaker/alert_file.sh pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" pcs alert recipient add snmp_alert value=192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])",
"pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert create id=my_alert path=/path/to/myscript.sh",
"pcs alert [config|show]",
"pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert remove alert-id",
"pcs alert recipient add alert-id value= recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient remove recipient-id",
"pcs alert recipient add my-alert value=my-alert-recipient id=my-recipient-id options value=some-address",
"pcs alert create id=my-alert path=/path/to/myscript.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=\"%D %H:%M\" pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=\"%c\"",
"pcs alert create path=/my/path pcs alert recipient add alert value=rec_value pcs alert recipient add alert value=rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)",
"pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta timeout=50s timestamp-format=\"%H%B%S\" pcs alert recipient add my-alert value=my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: timestamp-format=%H%B%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient)",
"pcs alert update my-alert options option1=newvalue1 meta timestamp-format=\"%H%M%S\" pcs alert recipient update my-alert-recipient options option1=new meta timeout=60s pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=%H%M%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s",
"pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=\"%M%B%S\" timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s",
"pcs alert remove myalert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-pacemaker-alert-agents_configuring-and-managing-high-availability-clusters |
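As noted above, the following is a minimal sketch of an alert agent, not one of the samples shipped in /usr/share/pacemaker/alerts/. It only logs a few of the CRM_alert_* environment variables described in section 29.7; the fallback log path is an illustrative assumption and must be writable by the hacluster user, which runs the agent.

#!/bin/sh
# Minimal illustrative alert agent: append one line per alert to a log file.
# CRM_alert_recipient may be empty if no recipient is configured, so fall
# back to an assumed default path in that case.
logfile="${CRM_alert_recipient:-/var/log/pcmk_alert_example.log}"
# CRM_alert_timestamp is formatted by the user-configured timestamp-format
# meta option, so treat it as untrusted text and only ever write it to the log.
printf '%s [%s] node=%s desc=%s rc=%s\n' \
    "${CRM_alert_timestamp}" \
    "${CRM_alert_kind}" \
    "${CRM_alert_node}" \
    "${CRM_alert_desc}" \
    "${CRM_alert_rc}" >> "${logfile}" 2>/dev/null
exit 0

Such a script would be installed on every node in the same way as the shipped samples (for example, with install --mode=0755 to a hypothetical /var/lib/pacemaker/alert_example.sh) and registered with pcs alert create path=/var/lib/pacemaker/alert_example.sh.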
27.2.6. Additional Command-Line Options | 27.2.6. Additional Command-Line Options Additional command-line options for at and batch include the following: Table 27.1. at and batch Command-Line Options Option Description -f Read the commands or shell script from a file instead of specifying them at the prompt. -m Send email to the user when the job has been completed. -v Display the time that the job will be executed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-autotasks-commandline-options
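For illustration, the options above can be combined as follows (the script path and time specification are hypothetical):

at -f /home/user/backup.sh -m now + 1 hour

This queues the script to run one hour from now and emails the user when the job completes; adding -v to the same command also prints the time at which the queued job will run.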
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/providing-direct-documentation-feedback_openjdk
Chapter 15. Using the validation framework | Chapter 15. Using the validation framework Red Hat OpenStack Platform includes a validation framework that you can use to verify the requirements and functionality of the undercloud and overcloud. The framework includes two types of validations: Manual Ansible-based validations, which you execute through the openstack tripleo validator command set. Automatic in-flight validations, which execute during the deployment process. You must understand which validations you want to run, and skip validations that are not relevant to your environment. For example, the pre-deployment validation includes a test for TLS-everywhere. If you do not intend to configure your environment for TLS-everywhere, this test fails. Use the --validation option in the openstack tripleo validator run command to refine the validation according to your environment. 15.1. Ansible-based validations During the installation of Red Hat OpenStack Platform director, director also installs a set of playbooks from the openstack-tripleo-validations package. Each playbook contains tests for certain system requirements and a set of groups that define when to run the test: no-op Validations that run a no-op (no operation) task to verify that the workflow functions correctly. These validations run on both the undercloud and overcloud. prep Validations that check the hardware configuration of the undercloud node. Run these validations before you run the openstack undercloud install command. openshift-on-openstack Validations that check that the environment meets the requirements to be able to deploy OpenShift on OpenStack. pre-introspection Validations to run before node introspection using Ironic Inspector. pre-deployment Validations to run before the openstack overcloud deploy command. post-deployment Validations to run after the overcloud deployment has finished. pre-upgrade Validations to validate your OpenStack deployment before an upgrade. post-upgrade Validations to validate your OpenStack deployment after an upgrade. 15.2. Listing validations Run the openstack tripleo validator list command to list the different types of validations available. Procedure Source the stackrc file. Run the openstack tripleo validator list command: To list all validations, run the command without any options: To list validations in a group, run the command with the --group option: Note For a full list of options, run openstack tripleo validator list --help. 15.3. Running validations To run a validation or validation group, use the openstack tripleo validator run command. To see a full list of options, use the openstack tripleo validator run --help command. Procedure Source the stackrc file: Create and validate a static inventory file called inventory.yaml. Enter the openstack tripleo validator run command: To run a single validation, enter the command with the --validation option and the name of the validation. For example, to check the memory requirements of each node, enter --validation check-ram: If the overcloud uses a plan name that is different from the default overcloud name, set the plan name with the --plan option: To run multiple specific validations, use the --validation option with a comma-separated list of the validations that you want to run. For more information about viewing the list of available validations, see Listing validations.
To run all validations in a group, enter the command with the --group option: To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report: 15.4. Viewing validation history Director saves the results of each validation after you run a validation or group of validations. View past validation results with the openstack tripleo validator show history command. Prerequisites You have run a validation or group of validations. Procedure Source the stackrc file: View a list of all validations: To view history for a specific validation type, run the same command with the --validation option: View the log for a specific validation UUID with the openstack tripleo validator show run --full command: 15.5. Validation framework log format After you run a validation or group of validations, director saves a JSON-formatted log from each validation in the /var/logs/validations directory. You can view the file manually or use the openstack tripleo validator show run --full command to display the log for a specific validation UUID. Each validation log file follows a specific format: <UUID>_<Name>_<Time> UUID The Ansible UUID for the validation. Name The Ansible name for the validation. Time The start date and time for when you ran the validation. Each validation log contains three main parts: plays stats validation_output plays The plays section contains information about the tasks that the director performed as part of the validation: play A play is a group of tasks. Each play section contains information about that particular group of tasks, including the start and end times, the duration, the host groups for the play, and the validation ID and path. tasks The individual Ansible tasks that director runs to perform the validation. Each tasks section contains a hosts section, which contains the action that occurred on each individual host and the results from the execution of the actions. The tasks section also contains a task section, which contains the duration of the task. stats The stats section contains a basic summary of the outcome of all tasks on each host, such as the tasks that succeeded and failed. validation_output If any tasks failed or caused a warning message during a validation, the validation_output contains the output of that failure or warning. A skeletal example of this log structure is shown after the command listing below. 15.6. Validation framework log output formats The default behavior of the validation framework is to save validation logs in JSON format. You can change the output of the logs with the ANSIBLE_STDOUT_CALLBACK environment variable. To change the validation output log format, run a validation and include the --extra-env-vars ANSIBLE_STDOUT_CALLBACK=<callback> option: Replace <callback> with an Ansible output callback. To view a list of the standard Ansible output callbacks, run the following command: The validation framework includes the following additional callbacks. validation_json The framework saves JSON-formatted validation results as a log file in /var/logs/validations. This is the default callback for the validation framework. validation_stdout The framework displays JSON-formatted validation results on screen. http_json The framework sends JSON-formatted validation results to an external logging server. You must also include additional environment variables for this callback: HTTP_JSON_SERVER The URL for the external server. HTTP_JSON_PORT The port for the API entry point of the external server. The default port is 8989.
Set these environment variables with additional --extra-env-vars options: Important Before you use the http_json callback, you must add http_json to the callback_whitelist parameter in your ansible.cfg file: 15.7. In-flight validations Red Hat OpenStack Platform includes in-flight validations in the templates of composable services. In-flight validations verify the operational status of services at key steps of the overcloud deployment process. In-flight validations run automatically as part of the deployment process. Some in-flight validations also use the roles from the openstack-tripleo-validations package. | [
"source ~/stackrc",
"openstack tripleo validator list",
"openstack tripleo validator list --group prep",
"source ~/stackrc",
"tripleo-ansible-inventory --static-yaml-inventory inventory.yaml openstack tripleo validator run --group pre-introspection -i inventory.yaml",
"openstack tripleo validator run --validation check-ram",
"openstack tripleo validator run --validation check-ram --plan myovercloud",
"openstack tripleo validator run --group prep",
"openstack tripleo validator show run --full <UUID>",
"source ~/stackrc",
"openstack tripleo validator show history",
"openstack tripleo validator show history --validation ntp",
"openstack tripleo validator show run --full 7380fed4-2ea1-44a1-ab71-aab561b44395",
"openstack tripleo validator run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=<callback> --validation check-ram",
"ansible-doc -t callback -l",
"openstack tripleo validator run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=http_json --extra-env-vars HTTP_JSON_SERVER=http://logserver.example.com --extra-env-vars HTTP_JSON_PORT=8989 --validation check-ram",
"callback_whitelist = http_json"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_using-the-validation-framework |
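As promised above, here is a skeletal sketch of the validation log structure described in section 15.5. It uses only the part names given in that section (plays, play, tasks, hosts, task, stats, validation_output); the exact key names and nesting in a real file under /var/logs/validations can differ between releases, and every value shown is a placeholder.

{
  "plays": [
    {
      "play": {
        "validation_id": "check-ram",
        "validation_path": "<path to the validation playbook>",
        "duration": { "start": "<start time>", "end": "<end time>" },
        "host_group": "<host group for the play>"
      },
      "tasks": [
        {
          "task": { "name": "<task name>", "duration": "<task duration>" },
          "hosts": {
            "<host>": { "action": "<action run on this host>", "result": "<result of the action>" }
          }
        }
      ]
    }
  ],
  "stats": { "<host>": { "ok": 1, "failures": 0 } },
  "validation_output": [ "<failure or warning output, present only when a task fails or warns>" ]
}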
Chapter 41. Azure Storage Queue Service Component | Chapter 41. Azure Storage Queue Service Component Available as of Camel version 2.19 The Azure Queue component supports storing and retrieving messages to and from the Azure Storage Queue service. Prerequisites You must have a valid Microsoft Azure account. More information is available at Azure Portal. 41.1. URI Format azure-queue://accountName/queueName[?options] The queue will be created if it does not already exist. You can append query options to the URI in the following format, ?option=value&option2=value&... For example, to get message content from the queue messageQueue in the camelazure storage account, use the following snippet: from("azure-queue:/camelazure/messageQueue"). to("file://queuedirectory"); 41.2. URI Options The Azure Storage Queue Service component has no options. The Azure Storage Queue Service endpoint is configured using URI syntax: with the following path and query parameters: 41.2.1. Path Parameters (1 parameter): Name Description Default Type containerAndQueueUri Required Container Queue compact Uri String 41.2.2. Query Parameters (10 parameters): Name Description Default Type azureQueueClient (common) The queue service client CloudQueue credentials (common) Set the storage credentials, required in most cases StorageCredentials bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern messageTimeToLive (producer) Message Time To Live in seconds int messageVisibilityDelay (producer) Message Visibility Delay in seconds int operation (producer) The operation to do in case the user does not want to send only a message. There are three enum options and the value can be one of the following: sendBatchMessage, deleteMessage, listQueues listQueues QueueServiceOperations queuePrefix (producer) Set a prefix which can be used for listing the queues String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 41.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.azure-queue.enabled Enable azure-queue component true Boolean camel.component.azure-queue.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean Required Azure Storage Queue Service component options You have to provide the containerAndQueue URI and the credentials. 41.4. Usage 41.4.1. Message headers evaluated by the Azure Storage Queue Service producer Header Type Description 41.4.2.
Message headers set by the Azure Storage Queue Service producer Header Type Description 41.4.3. Message headers set by the Azure Storage Queue Service consumer Header Type Description 41.4.4. Azure Queue Service operations Operation Description listQueues List the queues. createQueue Create the queue. deleteQueue Delete the queue. addMessage Add a message to the queue. retrieveMessage Retrieve a message from the queue. peekMessage View the message inside the queue, for example, to determine whether the message arrived at the correct queue. updateMessage Update the message in the queue. deleteMessage Delete the message in the queue. 41.4.5. Azure Queue Client configuration If your Camel application is running behind a firewall or if you need to have more control over the Azure Queue Client configuration, you can create your own instance: StorageCredentials credentials = new StorageCredentialsAccountAndKey("camelazure", "thekey"); CloudQueue client = new CloudQueue("camelazure", credentials); registry.bind("azureQueueClient", client); and refer to it in your Camel azure-queue component configuration: from("azure-queue:/camelazure/messageQueue?azureQueueClient=#azureQueueClient") .to("mock:result"); A producer route that sends message bodies to the queue is sketched after the command listing below. 41.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-azure</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel (2.19.0 or higher). 41.6. See Also Configuring Camel Component Endpoint Getting Started Azure Component | [
"azure-queue://accountName/queueName[?options]",
"from(\"azure-queue:/camelazure/messageQueue\"). to(\"file://queuedirectory\");",
"azure-queue:containerAndQueueUri",
"StorageCredentials credentials = new StorageCredentialsAccountAndKey(\"camelazure\", \"thekey\"); CloudQueue client = new CloudQueue(\"camelazure\", credentials); registry.bind(\"azureQueueClient\", client);",
"from(\"azure-queue:/camelazure/messageQueue?azureQueueClient=#client\") .to(\"mock:result\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-azure</artifactId> <version>USD{camel-version}</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/azure-queue-component |
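Building on the client configuration in section 41.4.5, the following is a hedged sketch of the producer direction, which the tables above describe but do not illustrate. It assumes the default producer behavior of sending the exchange body as a queue message (per the operation option description); the direct: endpoint name and class name are illustrative only.

import org.apache.camel.builder.RouteBuilder;

public class AzureQueueProducerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Send each exchange body to the messageQueue queue in the camelazure
        // account, reusing the CloudQueue bean bound as "azureQueueClient"
        // in the registry (see the client configuration example above).
        from("direct:sendToQueue")
            .to("azure-queue:/camelazure/messageQueue?azureQueueClient=#azureQueueClient");
    }
}

A body can then be handed to the route with a ProducerTemplate, for example template.sendBody("direct:sendToQueue", "Hello from Camel").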
3.7. Red Hat OpenStack Platform 16.2 for RHEL 8 x86_64 (RPMs) | 3.7. Red Hat OpenStack Platform 16.2 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the openstack-16.2-for-rhel-8-x86_64-rpms repository. Table 3.7. Red Hat OpenStack Platform 16.2 for RHEL 8 x86_64 (RPMs) Packages Name Version Advisory XStatic-Angular-common 1.5.8.0-11.el8ost.1 RHEA-2021:3483 XStatic-Magic-Search-common 0.2.5.1-13.el8ost.1 RHEA-2021:3483 ansible-collections-openstack 1.4.1-2.20210608074808.4a1b092.el8ost.1 RHEA-2021:3483 ansible-config_template 1.1.2-2.20210531235813.57efa55.el8ost.2 RHEA-2021:3483 ansible-pacemaker 1.0.4-2.20210527194420.accaf26.el8ost.2 RHEA-2021:3483 ansible-role-atos-hsm 0.1.1-2.20210527154957.1269408.el8ost.2 RHEA-2021:3483 ansible-role-chrony 1.0.4-2.20210601014939.5580549.el8ost.2 RHEA-2021:3483 ansible-role-container-registry 1.3.0-2.20210527203727.41c93a4.el8ost.1 RHEA-2021:3483 ansible-role-lunasa-hsm 1.1.1-2.20210603174813.26da379.el8ost.1 RHEA-2021:3483 ansible-role-network-runner017 0.1.7-4.el8ost.1 RHEA-2021:3483 ansible-role-openstack-ml2 3.0.1-2.20210528062320.e24d01c.el8ost.1 RHEA-2021:3483 ansible-role-openstack-operations 0.0.1-2.20210527195623.3937ea4.el8ost.2 RHEA-2021:3483 ansible-role-redhat-subscription 1.1.5-3.20210527214016.17a8bd5.el8ost.1 RHEA-2021:3483 ansible-role-thales-hsm 0.2.1-2.20210527212935.52af8e8.el8ost.1 RHEA-2021:3483 ansible-role-tripleo-modify-image 1.2.3-2.20210601001841.b304c89.el8ost.2 RHEA-2021:3483 ansible-tripleo-ipa 0.2.2-2.20210527211841.9159108.el8ost.2 RHEA-2021:3483 ansible-tripleo-ipsec 9.2.0-2.20210527201530.ffe104c.el8ost.2 RHEA-2021:3483 ansible-tripleo-powerflex 0.0.1-2.20210527200651.8633fbd.el8ost.2 RHEA-2021:3483 bootswatch-common 3.3.7.0-12.el8ost.1 RHEA-2021:3483 bootswatch-fonts 3.3.7.0-12.el8ost.1 RHEA-2021:3483 collectd 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-amqp 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-amqp1 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-apache 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-bind 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-ceph 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-chrony 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-connectivity 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-curl 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-curl_json 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-curl_xml 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-dbi 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-disk 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-dns 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-dpdk_telemetry 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-generic-jmx 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-hugepages 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-ipmi 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-iptables 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-libpod-stats 1.0.3-1.el8ost.2 RHEA-2021:3483 collectd-log_logstash 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-mcelog 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-memcachec 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-mysql 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-netlink 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-openldap 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-ovs-events 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-ovs-stats 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-pcie-errors 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-ping 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-pmu 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-procevent 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-python 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-rdt 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-sensors 
5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-sensubility 0.1.8-2.el8ost.1 RHEA-2021:3483 collectd-smart 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-snmp 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-snmp-agent 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-sysevent 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-turbostat 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-utils 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-virt 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-write_http 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-write_kafka 5.11.0-8.el8ost.1 RHEA-2021:3483 collectd-write_prometheus 5.11.0-8.el8ost.1 RHEA-2021:3483 cpp-hocon 0.1.8-3.el8ost RHEA-2021:3483 crudini 0.9-11.el8ost.1 RHEA-2021:3483 dib-utils 0.0.11-2.20210527224837.51661c3.el8ost.1 RHEA-2021:3483 dibbler-client 1.0.1-14.el8ost.1 RHEA-2021:3483 dibbler-relay 1.0.1-14.el8ost.1 RHEA-2021:3483 dibbler-requestor 1.0.1-14.el8ost.1 RHEA-2021:3483 dibbler-server 1.0.1-14.el8ost.1 RHEA-2021:3483 diskimage-builder 3.9.0-2.20210603124809.cb96117.el8ost.1 RHEA-2021:3483 double-conversion 3.1.5-4.el8ost.1 RHEA-2021:3483 dumb-init 1.1.3-20.el8ost.1 RHEA-2021:3483 elixir 1.9.1-2.el8ost.1 RHEA-2021:3483 erlang-asn1 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-compiler 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-crypto 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-eldap 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-erts 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-hipe 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-inets 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-kernel 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-mnesia 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-os_mon 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-parsetools 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-public_key 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-runtime_tools 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-sasl 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-sd_notify 1.1-2.el8ost.1 RHEA-2021:3483 erlang-snmp 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-ssl 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-stdlib 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-syntax_tools 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-tools 23.3.4-1.el8ost.2 RHEA-2021:3483 erlang-xmerl 23.3.4-1.el8ost.2 RHEA-2021:3483 etcd 3.3.23-3.1.el8ost.1 RHSA-2021:3487 facter 3.9.3-14.el8ost.1 RHEA-2021:3483 fontawesome-fonts 4.7.0-11.el8ost.1 RHEA-2021:3483 fontawesome-fonts-web 4.7.0-11.el8ost.1 RHEA-2021:3483 gnocchi-api 4.3.6-2.20210528111817.d57fa67.el8ost RHEA-2021:3483 gnocchi-common 4.3.6-2.20210528111817.d57fa67.el8ost RHEA-2021:3483 gnocchi-metricd 4.3.6-2.20210528111817.d57fa67.el8ost RHEA-2021:3483 gnocchi-statsd 4.3.6-2.20210528111817.d57fa67.el8ost RHEA-2021:3483 golang-github-BurntSushi-toml-devel 0-0.11.git2ceedfe.1.el8ost RHEA-2021:3483 golang-github-Sirupsen-logrus-devel 1.1.1-5.el8ost.1 RHEA-2021:3483 golang-github-davecgh-go-spew-devel 0-0.12.git6d21280.1.el8ost.1 RHEA-2021:3483 golang-github-go-ini-ini-devel 1.39.3-0.1.gitf55231c.el8ost.1 RHEA-2021:3483 golang-github-golang-sys-devel 0-0.16.20181125git62eef0e.1.el8ost.1 RHEA-2021:3483 golang-github-infrawatch-apputils 0.1-3.git8439bdc.el8ost.1 RHEA-2021:3483 golang-github-pmezard-go-difflib-devel 0-0.10.git792786c.1.el8ost.1 RHEA-2021:3483 golang-github-streadway-amqp-devel 0-0.3.20190404git75d898a.el8ost.1 RHEA-2021:3483 golang-github-stretchr-objx-devel 0-0.13.git1a9d0bb.1.el8ost RHEA-2021:3483 golang-github-stretchr-testify-devel 1.2.2-4.el8ost RHEA-2021:3483 golang-github-urfave-cli-devel 1.20.0-4.el8ost.1 RHEA-2021:3483 golang-github-vbatts-tar-split 0.11.1-4.el8ost RHEA-2021:3483 golang-golangorg-crypto-devel 0-0.15.20181125git3d3f9f4.1.el8ost 
RHEA-2021:3483 golang-gopkg-check-devel 1-15.el8ost RHEA-2021:3483 golang-gopkg-yaml-devel 1-17.el8ost RHEA-2021:3483 golang-gopkg-yaml-devel-v2 1-17.el8ost RHEA-2021:3483 golang-qpid-apache 0.32.0-rc1.7.el8ost.1 RHEA-2021:3483 heat-cfntools 1.4.2-11.el8ost.1 RHEA-2021:3483 hiera 3.3.1-10.el8ost.1 RHEA-2021:3483 intel-cmt-cat 3.1.1-3.el8ost RHEA-2021:3483 jevents 109-3.el8ost RHEA-2021:3483 kuryr-binding-scripts 1.1.1-2.20210527165521.41e6964.el8ost.1 RHEA-2021:3483 leatherman 1.4.5-6.el8ost.1 RHEA-2021:3483 libcollectdclient 5.11.0-8.el8ost.1 RHEA-2021:3483 libdbi 0.9.0-11.el8ost RHEA-2021:3483 liberasurecode 1.5.0-10.el8ost.1 RHEA-2021:3483 liboping 1.10.0-11.el8ost RHEA-2021:3483 libsodium 1.0.16-5.el8ost RHEA-2021:3483 libwebsockets 2.4.2-2.el8 RHEA-2021:3483 mdi-common 1.4.57.0-14.el8ost.1 RHEA-2021:3483 mdi-fonts 1.4.57.0-14.el8ost.1 RHEA-2021:3483 ndisc6 1.0.3-10.el8ost RHEA-2021:3483 novnc 1.1.0-2.el8ost RHEA-2021:3483 octavia-amphora-image-x86_64 16.2-20210902.2.el8ost RHEA-2021:3485 openstack-aodh-api 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-aodh-common 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-aodh-compat 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-aodh-evaluator 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-aodh-expirer 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-aodh-listener 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-aodh-notifier 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 openstack-barbican 9.0.2-2.20210528102937.3b66ec1.el8ost.2 RHEA-2021:3483 openstack-barbican-api 9.0.2-2.20210528102937.3b66ec1.el8ost.2 RHEA-2021:3483 openstack-barbican-common 9.0.2-2.20210528102937.3b66ec1.el8ost.2 RHEA-2021:3483 openstack-barbican-keystone-listener 9.0.2-2.20210528102937.3b66ec1.el8ost.2 RHEA-2021:3483 openstack-barbican-worker 9.0.2-2.20210528102937.3b66ec1.el8ost.2 RHEA-2021:3483 openstack-ceilometer-central 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 openstack-ceilometer-common 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 openstack-ceilometer-compute 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 openstack-ceilometer-ipmi 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 openstack-ceilometer-notification 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 openstack-ceilometer-polling 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 openstack-cinder 15.6.1-2.20210528143332.el8ost.3 RHEA-2021:3483 openstack-dashboard 16.2.3-2.20210528135143.2d2f944.el8ost.1 RHEA-2021:3483 openstack-dashboard-theme 16.0.2-2.el8ost RHEA-2021:3483 openstack-designate-agent 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-api 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-central 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-common 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-mdns 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-producer 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-sink 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-designate-ui 9.0.1-2.20210527205734.51a4458.el8ost.1 RHEA-2021:3483 openstack-designate-worker 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 openstack-ec2-api 9.0.1-2.20210528091017.f901570.el8ost.2 RHEA-2021:3483 openstack-glance 19.0.5-2.20210528112824.b3de9da.el8ost.1 RHEA-2021:3483 
openstack-heat-agents 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 openstack-heat-api 13.1.1-2.20210528124755.437c43d.el8ost.1 RHEA-2021:3483 openstack-heat-api-cfn 13.1.1-2.20210528124755.437c43d.el8ost.1 RHEA-2021:3483 openstack-heat-common 13.1.1-2.20210528124755.437c43d.el8ost.1 RHEA-2021:3483 openstack-heat-engine 13.1.1-2.20210528124755.437c43d.el8ost.1 RHEA-2021:3483 openstack-heat-monolith 13.1.1-2.20210528124755.437c43d.el8ost.1 RHEA-2021:3483 openstack-heat-ui 2.0.2-2.20210528081858.7d643ab.el8ost.2 RHEA-2021:3483 openstack-ironic-api 13.0.8-2.20210528130908.f1b87e8.el8ost.1 RHEA-2021:3483 openstack-ironic-common 13.0.8-2.20210528130908.f1b87e8.el8ost.1 RHEA-2021:3483 openstack-ironic-conductor 13.0.8-2.20210528130908.f1b87e8.el8ost.1 RHEA-2021:3483 openstack-ironic-inspector 9.2.5-2.20210528095817.d697c7c.el8ost.2 RHEA-2021:3483 openstack-ironic-inspector-dnsmasq 9.2.5-2.20210528095817.d697c7c.el8ost.2 RHEA-2021:3483 openstack-ironic-python-agent 5.0.5-2.20210611024819.el8ost.3 RHEA-2021:3483 openstack-ironic-python-agent-builder 2.8.0-2.20210612124808.609531a.el8ost.1 RHEA-2021:3483 openstack-ironic-staging-drivers 0.12.1-2.20210527165518.b5a8aaf.el8ost.1 RHEA-2021:3483 openstack-ironic-ui 3.5.5-2.20210528021151.1f091c9.el8ost.1 RHEA-2021:3483 openstack-keystone 16.0.2-2.20210608194824.c654559.el8ost.1 RHEA-2021:3483 openstack-manila 9.1.6-2.20210603154809.479a8a7.el8ost.1 RHEA-2021:3483 openstack-manila-share 9.1.6-2.20210603154809.479a8a7.el8ost.1 RHEA-2021:3483 openstack-manila-ui 2.19.3-2.20210528073804.el8ost.2 RHEA-2021:3483 openstack-mistral-all 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-mistral-api 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-mistral-common 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-mistral-engine 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-mistral-event-engine 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-mistral-executor 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-mistral-notifier 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 openstack-neutron 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-bgp-dragent 15.0.1-2.20210527193223.56de1c4.el8ost.1 RHEA-2021:3483 openstack-neutron-bigswitch-agent 15.0.3-2.20210528133247.b53155a.el8ost.1 RHEA-2021:3483 openstack-neutron-common 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-dynamic-routing-common 15.0.1-2.20210527193223.56de1c4.el8ost.1 RHEA-2021:3483 openstack-neutron-l2gw-agent 15.0.1-2.20210527193417.1f84472.el8ost.1 RHEA-2021:3483 openstack-neutron-linuxbridge 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-macvtap-agent 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-metering-agent 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-ml2 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-openvswitch 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-rpc-server 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-neutron-sriov-nic-agent 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 openstack-nova 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-api 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-common 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-compute 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-conductor 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-console 
20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-migration 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-novncproxy 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-scheduler 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-serialproxy 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-nova-spicehtml5proxy 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 openstack-octavia-amphora-agent 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-octavia-api 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-octavia-common 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-octavia-diskimage-create 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-octavia-health-manager 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-octavia-housekeeping 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-octavia-ui 4.0.2-2.20210604014813.8463b8f.el8ost.1 RHEA-2021:3483 openstack-octavia-worker 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 openstack-panko-api 7.0.1-0.20191017041323.9b551e7.el8ost.1 RHEA-2021:3483 openstack-panko-common 7.0.1-0.20191017041323.9b551e7.el8ost.1 RHEA-2021:3483 openstack-placement-api 2.0.1-2.20210527201546.ff55034.el8ost.1 RHEA-2021:3483 openstack-placement-common 2.0.1-2.20210527201546.ff55034.el8ost.1 RHEA-2021:3483 openstack-selinux 0.8.28-2.20210612124808.9cd3782.el8ost.1 RHEA-2021:3483 openstack-swift-account 2.23.3-2.20210607090143.81d845f.el8ost.1 RHEA-2021:3483 openstack-swift-container 2.23.3-2.20210607090143.81d845f.el8ost.1 RHEA-2021:3483 openstack-swift-object 2.23.3-2.20210607090143.81d845f.el8ost.1 RHEA-2021:3483 openstack-swift-proxy 2.23.3-2.20210607090143.81d845f.el8ost.1 RHEA-2021:3483 openstack-tempest 26.1.0-2.20210531080657.271f820.el8ost.1 RHEA-2021:3483 openstack-tempest-all 26.1.0-2.20210531080657.271f820.el8ost.1 RHEA-2021:3483 openstack-tripleo-common 11.6.1-2.20210603180856.el8ost.3 RHEA-2021:3483 openstack-tripleo-common-container-base 11.6.1-2.20210603180856.el8ost.3 RHEA-2021:3483 openstack-tripleo-common-containers 11.6.1-2.20210603180856.el8ost.3 RHEA-2021:3483 openstack-tripleo-common-devtools 11.6.1-2.20210603180856.el8ost.3 RHEA-2021:3483 openstack-tripleo-heat-templates 11.5.1-2.20210603174823.el8ost.9 RHEA-2021:3483 openstack-tripleo-image-elements 10.6.3-2.20210601000815.9b50a3c.el8ost.2 RHEA-2021:3483 openstack-tripleo-puppet-elements 11.3.1-2.20210528114722.9d10d8e.el8ost.2 RHEA-2021:3483 openstack-tripleo-validations 11.6.1-2.20210612074808.8644a02.el8ost.1 RHEA-2021:3483 openstack-zaqar 9.0.1-2.20210528072106.15a8ad7.el8ost.1 RHEA-2021:3483 os-apply-config 10.6.0-2.20210528113933.41d86e3.el8ost.2 RHEA-2021:3483 os-collect-config 10.6.0-2.20210528112824.5b8355d.el8ost.2 RHEA-2021:3483 os-net-config 11.5.0-2.20210528113720.48c6710.el8ost.2 RHEA-2021:3483 os-refresh-config 10.4.1-2.20210528093925.d0fdb42.el8ost.2 RHEA-2021:3483 paunch-services 5.5.1-2.20210527204730.9b6bef4.el8ost.1 RHEA-2021:3483 plotnetcfg 0.4.1-14.el8ost.1 RHEA-2021:3483 pmu-data 109-3.el8ost RHEA-2021:3483 pmu-tools 109-3.el8ost RHEA-2021:3483 puppet 5.5.10-10.el8ost.1 RHEA-2021:3483 puppet-aodh 15.5.0-2.20210601020548.09972d8.el8ost.2 RHEA-2021:3483 puppet-apache 5.1.0-2.20210528023135.1fa9b1c.el8ost.2 RHEA-2021:3483 puppet-archive 4.2.1-2.20210527171609.0538163.el8ost.2 RHEA-2021:3483 puppet-auditd 2.2.1-2.20210527172515.189b22b.el8ost.2 RHEA-2021:3483 puppet-barbican 15.5.0-2.20210601003945.6881351.el8ost.2 
RHEA-2021:3483 puppet-cassandra 2.7.4-2.20210528035721.9954256.el8ost.2 RHEA-2021:3483 puppet-ceilometer 15.5.0-2.20210601004737.2f62d7f.el8ost.2 RHEA-2021:3483 puppet-ceph 3.1.2-2.20210603181657.ffa80da.el8ost.1 RHEA-2021:3483 puppet-certmonger 2.7.0-2.20210528094925.b2f2d23.el8ost.1 RHEA-2021:3483 puppet-cinder 15.5.0-2.20210601004754.d67dac0.el8ost.2 RHEA-2021:3483 puppet-collectd 12.0.1-2.20210528063800.4686e16.el8ost.1 RHEA-2021:3483 puppet-concat 6.1.0-2.20210528022416.9baa8fc.el8ost.2 RHEA-2021:3483 puppet-contrail 1.0.1-2.20210528040000.6f87929.el8ost.2 RHEA-2021:3483 puppet-corosync 6.0.2-2.20210528025812.961add3.el8ost.2 RHEA-2021:3483 puppet-datacat 0.6.2-2.20210528040724.5cce8f2.el8ost.2 RHEA-2021:3483 puppet-designate 15.6.0-2.20210601020041.699d285.el8ost.2 RHEA-2021:3483 puppet-dns 6.2.1-2.20210528040902.2ae1cd7.el8ost.2 RHEA-2021:3483 puppet-ec2api 15.4.1-2.20210528041724.e38e26c.el8ost.2 RHEA-2021:3483 puppet-elasticsearch 6.4.0-2.20210528041855.725afd6.el8ost.2 RHEA-2021:3483 puppet-etcd 1.12.3-2.20210528042618.123d2af.el8ost.2 RHEA-2021:3483 puppet-fdio 18.2-2.20210528042751.6fd1c8e.el8ost.2 RHEA-2021:3483 puppet-firewall 2.1.0-2.20210528025107.4f4437a.el8ost.2 RHEA-2021:3483 puppet-git 0.5.0-2.20210528043748.4e4498e.el8ost.2 RHEA-2021:3483 puppet-glance 15.5.0-2.20210601005740.8a23345.el8ost.2 RHEA-2021:3483 puppet-gnocchi 15.5.0-2.20210601005850.c830d4b.el8ost.2 RHEA-2021:3483 puppet-haproxy 4.1.0-2.20210528044603.df96ffc.el8ost.2 RHEA-2021:3483 puppet-headless 5.5.10-10.el8ost.1 RHEA-2021:3483 puppet-heat 15.5.0-2.20210601010737.31e48ae.el8ost.2 RHEA-2021:3483 puppet-horizon 15.5.0-2.20210601010851.c300380.el8ost.2 RHEA-2021:3483 puppet-inifile 3.1.0-2.20210528022247.91efced.el8ost.2 RHEA-2021:3483 puppet-ipaclient 2.5.2-2.20210528044835.b086731.el8ost.2 RHEA-2021:3483 puppet-ironic 15.5.0-2.20210601011633.d553541.el8ost.2 RHEA-2021:3483 puppet-java 5.0.1-2.20210528045554.e57cbc8.el8ost.2 RHEA-2021:3483 puppet-kafka 5.3.1-2.20210528045828.88aa866.el8ost.2 RHEA-2021:3483 puppet-keepalived 0.0.2-2.20210528050548.bbca37a.el8ost.2 RHEA-2021:3483 puppet-keystone 15.5.0-2.20210601001735.1dc5b6e.el8ost.2 RHEA-2021:3483 puppet-kibana3 0.0.4-2.20210528050729.6ca9631.el8ost.2 RHEA-2021:3483 puppet-kmod 2.3.1-2.20210528051537.41e2a2b.el8ost.2 RHEA-2021:3483 puppet-manila 15.5.0-2.20210601014536.9c6604a.el8ost.2 RHEA-2021:3483 puppet-memcached 6.0.0-2.20210528123058.4c70dbd.el8ost.2 RHEA-2021:3483 puppet-midonet 1.0.0-2.20210528053422.a8cec1d.el8ost.1 RHEA-2021:3483 puppet-mistral 15.5.0-2.20210601011954.5dcd237.el8ost.2 RHEA-2021:3483 puppet-module-data 0.5.1-2.20210528052520.28dafce.el8ost.2 RHEA-2021:3483 puppet-mysql 10.4.0-2.20210528024030.95f9b98.el8ost.2 RHEA-2021:3483 puppet-n1k-vsm 0.0.2-2.20210528053535.92401b8.el8ost.2 RHEA-2021:3483 puppet-neutron 15.6.0-2.20210601015533.7f36270.el8ost.2 RHEA-2021:3483 puppet-nova 15.8.0-2.20210601013941.99789e3.el8ost.2 RHEA-2021:3483 puppet-nssdb 1.0.1-2.20210528031836.2ed2a2d.el8ost.2 RHEA-2021:3483 puppet-octavia 15.5.0-2.20210601021142.2f54828.el8ost.2 RHEA-2021:3483 puppet-opendaylight 8.4.3-2.20210528054417.bbe7ce5.el8ost.1 RHEA-2021:3483 puppet-openstack_extras 15.4.1-2.20210601022242.6ab7806.el8ost.2 RHEA-2021:3483 puppet-openstacklib 15.5.0-2.20210531234811.e3b61ab.el8ost.2 RHEA-2021:3483 puppet-oslo 15.5.0-2.20210531235814.883fa53.el8ost.2 RHEA-2021:3483 puppet-ovn 15.5.0-2.20210601013539.a6b0f69.el8ost.2 RHEA-2021:3483 puppet-pacemaker 1.1.0-2.20210528101831.6e272bf.el8ost.2 RHEA-2021:3483 puppet-panko 
15.4.1-0.20191014140135.49b7b3e.el8ost.1 RHEA-2021:3483 puppet-placement 2.5.0-2.20210601002849.8fe110e.el8ost.2 RHEA-2021:3483 puppet-qdr 4.4.1-2.20210528054536.d141271.el8ost.2 RHEA-2021:3483 puppet-rabbitmq 10.1.2-2.20210528110135.8b9b006.el8ost.2 RHEA-2021:3483 puppet-redis 4.2.2-2.20210528033823.be8d097.el8ost.2 RHEA-2021:3483 puppet-remote 10.0.0-2.20210528032009.7420908.el8ost.2 RHEA-2021:3483 puppet-rsync 1.1.3-2.20210528081652.b3ee352.el8ost.2 RHEA-2021:3483 puppet-rsyslog 3.3.1-2.20210528055214.0c2b6c8.el8ost.2 RHEA-2021:3483 puppet-sahara 15.4.1-2.20210601012947.e8c5a9d.el8ost.2 RHEA-2021:3483 puppet-server 5.5.10-10.el8ost.1 RHEA-2021:3483 puppet-snmp 3.9.0-2.20210528055525.5d73485.el8ost.2 RHEA-2021:3483 puppet-ssh 6.0.0-2.20210528033955.65570a3.el8ost.2 RHEA-2021:3483 puppet-staging 1.0.4-2.20210528023218.b466d93.el8ost.2 RHEA-2021:3483 puppet-stdlib 6.1.0-2.20210527224837.5aa891c.el8ost.2 RHEA-2021:3483 puppet-swift 15.5.0-2.20210601012532.1fdb986.el8ost.2 RHEA-2021:3483 puppet-sysctl 0.0.12-2.20210528024924.a3d160d.el8ost.2 RHEA-2021:3483 puppet-systemd 2.10.0-2.20210528111029.03d94fa.el8ost.2 RHEA-2021:3483 puppet-timezone 5.1.1-2.20210528060202.21b4a58.el8ost.2 RHEA-2021:3483 puppet-tomcat 3.1.0-2.20210528051721.a3f92d1.el8ost.2 RHEA-2021:3483 puppet-tripleo 11.6.2-2.20210603175725.el8ost.2 RHEA-2021:3483 puppet-trove 15.4.1-2.20210601003850.0eacf4d.el8ost.2 RHEA-2021:3483 puppet-vcsrepo 3.0.0-2.20210528032828.b06d5d3.el8ost.2 RHEA-2021:3483 puppet-veritas_hyperscale 1.0.0-2.20210527173407.7c7868a.el8ost.2 RHEA-2021:3483 puppet-vswitch 11.5.0-2.20210601000818.5d96dab.el8ost.2 RHEA-2021:3483 puppet-xinetd 3.3.0-2.20210528030944.d768da2.el8ost.2 RHEA-2021:3483 puppet-zaqar 15.4.1-2.20210528060419.88b97ec.el8ost.2 RHEA-2021:3483 puppet-zookeeper 0.9.0-2.20210528052531.5877cbf.el8ost.2 RHEA-2021:3483 python-openstackclient-lang 4.0.2-2.20210528091917.54bf2c0.el8ost.1 RHEA-2021:3483 python-oslo-cache-lang 1.37.1-2.20210528100035.3e30378.el8ost.1 RHEA-2021:3483 python-oslo-concurrency-lang 3.30.1-2.20210528084908.f4d2dd8.el8ost.1 RHEA-2021:3483 python-oslo-db-lang 5.0.2-2.20210527233747.fb40cdb.el8ost.1 RHEA-2021:3483 python-oslo-i18n-lang 3.24.0-2.20210527231638.91b39bb.el8ost.1 RHEA-2021:3483 python-oslo-log-lang 3.44.3-2.20210528064856.e19c407.el8ost.1 RHEA-2021:3483 python-oslo-middleware-lang 3.38.1-2.20210527231747.9bae80e.el8ost.1 RHEA-2021:3483 python-oslo-policy-lang 2.3.4-2.20210528073113.5904564.el8ost.1 RHEA-2021:3483 python-oslo-privsep-lang 1.33.5-2.20210528082906.ced0e7b.el8ost.1 RHEA-2021:3483 python-oslo-utils-lang 3.41.6-2.20210528071646.f4deaad.el8ost.1 RHEA-2021:3483 python-oslo-versionedobjects-lang 1.36.1-2.20210528005021.14ee7e0.el8ost.1 RHEA-2021:3483 python-oslo-vmware-lang 2.34.1-2.20210528002910.c592465.el8ost.1 RHEA-2021:3483 python-pycadf-common 2.10.0-2.20210528000650.d113c15.el8ost.1 RHEA-2021:3483 python3-Cython 0.29.2-10.el8ost.1 RHEA-2021:3483 python3-GitPython 2.1.11-8.el8ost.1 RHEA-2021:3483 python3-ImcSdk 0.9.6-7.el8ost.1 RHEA-2021:3483 python3-SecretStorage 2.3.1-9.el8ost RHEA-2021:3483 python3-XStatic 1.0.1-24.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular 1.5.8.0-11.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular-Bootstrap 2.2.0.0-11.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular-FileUpload 12.0.4.0-15.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular-Gettext 2.3.8.0-7.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular-Schema-Form 0.8.13.0-6.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular-UUID 0.0.4.0-10.el8ost.1 RHEA-2021:3483 
python3-XStatic-Angular-Vis 4.16.0.0-7.el8ost.1 RHEA-2021:3483 python3-XStatic-Angular-lrdragndrop 1.0.2.2-17.el8ost.1 RHEA-2021:3483 python3-XStatic-Bootstrap-Datepicker 1.3.1.0-17.el8ost.1 RHEA-2021:3483 python3-XStatic-Bootstrap-SCSS 3.3.7.1-11.el8ost.1 RHEA-2021:3483 python3-XStatic-D3 3.5.17.0-11.el8ost.1 RHEA-2021:3483 python3-XStatic-FileSaver 1.3.2.0-7.el8ost.1 RHEA-2021:3483 python3-XStatic-Font-Awesome 4.7.0.0-11.el8ost.1 RHEA-2021:3483 python3-XStatic-Hogan 2.0.0.2-18.el8ost.1 RHEA-2021:3483 python3-XStatic-JQuery-Migrate 1.2.1.1-18.el8ost.1 RHEA-2021:3483 python3-XStatic-JQuery-TableSorter 2.14.5.1-18.el8ost.1 RHEA-2021:3483 python3-XStatic-JQuery-quicksearch 2.0.3.1-18.el8ost.1 RHEA-2021:3483 python3-XStatic-JS-Yaml 3.8.1.0-8.el8ost.1 RHEA-2021:3483 python3-XStatic-JSEncrypt 2.3.1.1-10.el8ost.1 RHEA-2021:3483 python3-XStatic-Jasmine 2.4.1.1-10.el8ost.1 RHEA-2021:3483 python3-XStatic-Json2yaml 0.1.1.0-7.el8ost.1 RHEA-2021:3483 python3-XStatic-Magic-Search 0.2.5.1-13.el8ost.1 RHEA-2021:3483 python3-XStatic-Rickshaw 1.5.0.0-20.el8ost.1 RHEA-2021:3483 python3-XStatic-Spin 1.2.5.2-19.el8ost.1 RHEA-2021:3483 python3-XStatic-bootswatch 3.3.7.0-12.el8ost.1 RHEA-2021:3483 python3-XStatic-jQuery224 2.2.4.1-10.el8ost.2 RHEA-2021:3483 python3-XStatic-jquery-ui 1.12.0.1-10.el8ost.1 RHEA-2021:3483 python3-XStatic-mdi 1.4.57.0-14.el8ost.1 RHEA-2021:3483 python3-XStatic-objectpath 1.2.1.0-7.el8ost.1 RHEA-2021:3483 python3-XStatic-roboto-fontface 0.5.0.0-18.el8ost.1 RHEA-2021:3483 python3-XStatic-smart-table 1.4.13.2-10.el8ost.1 RHEA-2021:3483 python3-XStatic-termjs 0.0.7.0-10.el8ost.1 RHEA-2021:3483 python3-XStatic-tv4 1.2.7.0-6.el8ost.1 RHEA-2021:3483 python3-adal 1.2.0-3.el8ost RHEA-2021:3483 python3-alembic 1.0.7-7.el8ost.1 RHEA-2021:3483 python3-amqp 2.5.2-7.el8ost.1 RHEA-2021:3483 python3-aniso8601 0.82-9.el8ost.1 RHEA-2021:3483 python3-ansible-runner 1.4.0-1.el8ost RHEA-2021:3483 python3-anyjson 0.3.3-13.1.el8ost.1 RHEA-2021:3483 python3-aodh 9.0.1-2.20210528003922.5415b96.el8ost.1 RHEA-2021:3483 python3-aodhclient 1.3.0-2.20210528013035.a8651ec.el8ost.1 RHEA-2021:3483 python3-appdirs 1.4.0-10.el8ost.1 RHEA-2021:3483 python3-atomicwrites 1.3.0-6.el8ost.1 RHEA-2021:3483 python3-autobahn 19.1.1-6.el8ost.1 RHEA-2021:3483 python3-automaton 1.17.0-2.20210528011135.5e82feb.el8ost.1 RHEA-2021:3483 python3-barbican 9.0.2-2.20210528102937.3b66ec1.el8ost.2 RHEA-2021:3483 python3-barbican-tests-tempest 1.2.1-2.20210527204812.ad7f742.el8ost.2 RHEA-2021:3483 python3-barbicanclient 4.9.0-2.20210528003930.9c0e02d.el8ost.1 RHEA-2021:3483 python3-bcrypt 3.1.6-7.el8ost.1 RHEA-2021:3483 python3-beautifulsoup4 4.6.0-6.el8ost.1 RHEA-2021:3483 python3-boto 2.45.0-12.el8ost.1 RHEA-2021:3483 python3-boto3 1.9.101-2.el8ost RHEA-2021:3483 python3-botocore 1.12.119-2.el8ost RHEA-2021:3483 python3-cachetools 3.1.0-3.el8ost RHEA-2021:3483 python3-castellan 1.3.4-2.20210528085905.edf96c4.el8ost.2 RHEA-2021:3483 python3-ceilometer 13.1.3-2.20210608074808.dad2447.el8ost.1 RHEA-2021:3483 python3-ceilometermiddleware 1.5.0-2.20210528034715.fc21cde.el8ost.2 RHEA-2021:3483 python3-certifi 2018.10.15-9.el8ost.1 RHEA-2021:3483 python3-cinder 15.6.1-2.20210528143332.el8ost.3 RHEA-2021:3483 python3-cinder-tests-tempest 1.4.0-2.20210527203802.641d6a0.el8ost.2 RHEA-2021:3483 python3-cinderclient 5.0.2-2.20210528122638.7e9e31c.el8ost.1 RHEA-2021:3483 python3-cinderlib 1.0.1-2.20210528061312.199ebd4.el8ost.1 RHEA-2021:3483 python3-cinderlib-tests-functional 1.0.1-2.20210528061312.199ebd4.el8ost.1 RHEA-2021:3483 python3-cliff 
2.16.0-2.20210527234856.6b6b186.el8ost.1 RHEA-2021:3483 python3-cmd2 0.6.8-15.el8ost.1 RHEA-2021:3483 python3-collectd-gnocchi 1.7.2-2.20210527173411.de115a7.el8ost.2 RHEA-2021:3483 python3-collectd-rabbitmq-monitoring 0.0.6-4.el8ost RHEA-2021:3483 python3-colorama 0.4.1-6.el8ost.1 RHEA-2021:3483 python3-construct 2.8.10-7.el8ost.1 RHEA-2021:3483 python3-contextlib2 0.5.5-15.el8ost.1 RHEA-2021:3483 python3-cotyledon 1.7.3-9.el8ost.1 RHEA-2021:3483 python3-cradox 2.1.0-8.el8ost.1 RHEA-2021:3483 python3-croniter 0.3.27-6.el8ost.1 RHEA-2021:3483 python3-crypto 2.6.1-23.el8ost.1 RHEA-2021:3483 python3-cursive 0.2.2-2.20210528012034.d7cea1f.el8ost.1 RHEA-2021:3483 python3-daemon 2.1.2-14.el8ost.1 RHEA-2021:3483 python3-daiquiri 1.5.0-7.el8ost.1 RHEA-2021:3483 python3-dateutil 2.8.0-8.el8ost.1 RHEA-2021:3483 python3-ddt 1.2.0-8.el8ost.1 RHEA-2021:3483 python3-debtcollector 1.22.0-2.20210527225841.0be4911.el8ost.1 RHEA-2021:3483 python3-defusedxml 0.5.0-7.el8ost.1 RHEA-2021:3483 python3-designate 9.0.3-2.20210528102825.f101fab.el8ost.1 RHEA-2021:3483 python3-designate-tests-tempest 0.7.0-2.20210528061953.1096ab9.el8ost.1 RHEA-2021:3483 python3-designateclient 3.0.0-2.20210528014037.093d8d7.el8ost.1 RHEA-2021:3483 python3-dictdiffer 0.7.1-3.el8ost RHEA-2021:3483 python3-django-appconf 1.0.1-11.el8ost.1 RHEA-2021:3483 python3-django-compressor 2.2-9.el8ost.1 RHEA-2021:3483 python3-django-debreach 1.5.2-6.el8ost.1 RHEA-2021:3483 python3-django-horizon 16.2.3-2.20210528135143.2d2f944.el8ost.1 RHEA-2021:3483 python3-django-pyscss 2.0.2-17.el8ost.1 RHEA-2021:3483 python3-django20 2.0.13-16.el8ost.1 RHSA-2021:3490 python3-dogpile-cache 1.1.2-1.1.el8ost.1 RHEA-2021:3483 python3-dracclient 3.4.0-2.20210527195539.21ffd2b.el8ost.1 RHEA-2021:3483 python3-ec2-api 9.0.1-2.20210528091017.f901570.el8ost.2 RHEA-2021:3483 python3-editor 0.4-10.el8ost.1 RHEA-2021:3483 python3-etcd3gw 0.2.5-2.el8ost RHEA-2021:3483 python3-eventlet 0.25.2-5.el8ost.1 RHEA-2021:3483 python3-extras 1.0.0-11.el8ost RHEA-2021:3483 python3-falcon 1.4.1-11.el8ost.1 RHEA-2021:3483 python3-fasteners 0.14.1-20.el8ost.1 RHEA-2021:3483 python3-fixtures 3.0.0-13.el8ost.1 RHEA-2021:3483 python3-flake8 3.5.0-12.el8ost.1 RHEA-2021:3483 python3-flask 1.0.2-7.el8ost.1 RHEA-2021:3483 python3-flask-restful 0.3.6-13.el8ost.1 RHEA-2021:3483 python3-funcsigs 1.0.2-8.el8ost.1 RHEA-2021:3483 python3-future 0.16.0-7.el8ost RHEA-2021:3483 python3-futurist 1.9.0-2.20210527232646.25ffb8f.el8ost.1 RHEA-2021:3483 python3-gabbi 1.42.1-8.el8ost.1 RHEA-2021:3483 python3-gitdb 2.0.3-11.el8ost.1 RHEA-2021:3483 python3-glance 19.0.5-2.20210528112824.b3de9da.el8ost.1 RHEA-2021:3483 python3-glance-store 1.0.2-2.20210528102039.79e043a.el8ost RHEA-2021:3483 python3-glanceclient 2.17.1-2.20210528071101.1aba8f2.el8ost.1 RHEA-2021:3483 python3-gnocchi 4.3.6-2.20210528111817.d57fa67.el8ost RHEA-2021:3483 python3-gnocchiclient 7.0.4-2.20210527235858.64814b9.el8ost.1 RHEA-2021:3483 python3-google-auth 1.3.0-6.el8ost.1 RHEA-2021:3483 python3-greenlet 0.4.14-10.el8ost.1 RHEA-2021:3483 python3-gunicorn 19.9.0-10.el8ost.1 RHEA-2021:3483 python3-hardware 0.23.0-2.20210527194532.59211cc.el8ost.1 RHEA-2021:3483 python3-hardware-detect 0.23.0-2.20210527194532.59211cc.el8ost.1 RHEA-2021:3483 python3-heat-agent 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-ansible 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-apply-config 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-docker-cmd 
1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-hiera 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-json-file 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-puppet 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-tests-tempest 0.4.0-2.20210527174403.e4d6583.el8ost.1 RHEA-2021:3483 python3-heatclient 1.18.1-2.20210528082653.ed9edc6.el8ost.1 RHEA-2021:3483 python3-horizon-tests-tempest 0.2.0-2.20210527175411.730608b.el8ost.1 RHEA-2021:3483 python3-httplib2 0.13.1-2.el8ost.2 RHEA-2021:3483 python3-importlib-metadata 0.23-11.el8ost.2 RHEA-2021:3483 python3-ironic-inspector-client 3.7.1-2.20210528020511.3a41127.el8ost.1 RHEA-2021:3483 python3-ironic-lib 2.21.3-2.20210603224814.acdc7ad.el8ost.1 RHEA-2021:3483 python3-ironic-neutron-agent 1.4.1-2.20210528062901.d0d3c45.el8ost.1 RHEA-2021:3483 python3-ironic-prometheus-exporter 0.0.1-0.20190712090405.f7e9344.el8ost RHEA-2021:3483 python3-ironic-python-agent 5.0.5-2.20210611024819.el8ost.3 RHEA-2021:3483 python3-ironic-tests-tempest 1.5.1-2.20210527180411.11b8aac.el8ost.1 RHEA-2021:3483 python3-ironicclient 3.1.2-2.20210528013403.1220d76.el8ost.1 RHEA-2021:3483 python3-iso8601 0.1.12-8.el8ost.1 RHEA-2021:3483 python3-json-logger 0.1.7-7.el8ost.1 RHEA-2021:3483 python3-jsonpath-rw 1.2.3-8.el8ost.1 RHEA-2021:3483 python3-jsonpath-rw-ext 1.0.0-7.el8ost.1 RHEA-2021:3483 python3-junitxml 0.7-25.el8ost.1 RHEA-2021:3483 python3-kazoo 2.2.1-7.el8ost.1 RHEA-2021:3483 python3-kerberos 1.2.5-9.el8ost RHEA-2021:3483 python3-keyring 17.1.1-2.el8ost RHEA-2021:3483 python3-keystone 16.0.2-2.20210608194824.c654559.el8ost.1 RHEA-2021:3483 python3-keystone-tests-tempest 0.3.0-2.20210527180412.806103f.el8ost.1 RHEA-2021:3483 python3-keystoneauth1 3.17.4-2.20210609184811.8dc7366.el8ost.1 RHEA-2021:3483 python3-keystoneclient 3.21.0-2.20210527233755.79f150f.el8ost.1 RHEA-2021:3483 python3-keystonemiddleware 7.0.1-2.20210528001005.0a65b14.el8ost.1 RHEA-2021:3483 python3-kombu 4.6.6-7.el8ost.1 RHEA-2021:3483 python3-kubernetes 8.0.0-11.el8ost.1 RHEA-2021:3483 python3-kuryr-tests-tempest 0.5.0-2.20210527181313.2194649.el8ost.1 RHEA-2021:3483 python3-ldap3 2.4.1-10.el8ost.1 RHEA-2021:3483 python3-ldappool 2.4.0-8.el8ost.1 RHEA-2021:3483 python3-lesscpy 0.13.0-14.el8ost.1 RHEA-2021:3483 python3-linecache2 1.0.0-8.el8ost.1 RHEA-2021:3483 python3-lockfile 0.11.0-14.el8ost.1 RHEA-2021:3483 python3-logutils 0.3.5-11.1.el8ost.1 RHEA-2021:3483 python3-lz4 2.1.2-9.el8ost.1 RHEA-2021:3483 python3-magnumclient 2.16.0-2.20210527164515.8106c5f.el8ost.1 RHEA-2021:3483 python3-manila 9.1.6-2.20210603154809.479a8a7.el8ost.1 RHEA-2021:3483 python3-manila-tests-tempest 1.4.0-2.20210527221753.el8ost.2 RHEA-2021:3483 python3-manilaclient 1.29.0-2.20210528015043.1b2cafb.el8ost.1 RHEA-2021:3483 python3-markupsafe 1.1.0-7.el8ost.1 RHEA-2021:3483 python3-mccabe 0.6.1-12.1.el8ost.1 RHEA-2021:3483 python3-memcached 1.58-16.el8ost.1 RHEA-2021:3483 python3-metalsmith 0.15.1-2.20210527172505.0afed74.el8ost.1 RHEA-2021:3483 python3-microversion-parse 0.2.1-2.20210527162101.ae5e3ce.el8ost.1 RHEA-2021:3483 python3-migrate 0.13.0-6.el8ost.1 RHEA-2021:3483 python3-mimeparse 1.6.0-16.el8ost.1 RHEA-2021:3483 python3-mistral 9.1.1-2.20210527192117.62b0b37.el8ost.2 RHEA-2021:3483 python3-mistral-lib 1.2.1-2.20210527170517.4bac2b2.el8ost.1 RHEA-2021:3483 python3-mistral-tests-tempest 0.3.0-2.20210527181411.3c3a6cc.el8ost.1 RHEA-2021:3483 python3-mistralclient 3.10.0-2.20210527162055.dc246bf.el8ost.1 
RHEA-2021:3483 python3-mock 3.0.5-12.el8ost.1 RHEA-2021:3483 python3-monotonic 1.5-8.el8ost.1 RHEA-2021:3483 python3-more-itertools 4.1.0-7.el8ost.1 RHEA-2021:3483 python3-mox3 0.28.0-2.20210527230736.0a1e5b9.el8ost.1 RHEA-2021:3483 python3-msgpack 0.6.1-4.el8ost RHEA-2021:3483 python3-munch 2.2.0-8.el8ost.1 RHEA-2021:3483 python3-netifaces 0.10.9-9.el8ost.1 RHEA-2021:3483 python3-network-runner017 0.1.7-4.el8ost.1 RHEA-2021:3483 python3-networking-ansible 3.0.1-2.20210528062320.e24d01c.el8ost.1 RHEA-2021:3483 python3-networking-baremetal 1.4.1-2.20210528062901.d0d3c45.el8ost.1 RHEA-2021:3483 python3-networking-bgpvpn 11.0.2-2.20210527182221.909ade0.el8ost.1 RHEA-2021:3483 python3-networking-bgpvpn-dashboard 11.0.2-2.20210527182221.909ade0.el8ost.1 RHEA-2021:3483 python3-networking-bgpvpn-heat 11.0.2-2.20210527182221.909ade0.el8ost.1 RHEA-2021:3483 python3-networking-bigswitch 15.0.3-2.20210528133247.b53155a.el8ost.1 RHEA-2021:3483 python3-networking-l2gw 15.0.1-2.20210527193417.1f84472.el8ost.1 RHEA-2021:3483 python3-networking-l2gw-tests-tempest 0.1.1-2.20210527182412.a3af33b.el8ost.1 RHEA-2021:3483 python3-networking-ovn 7.4.2-2.20210601204825.el8ost.11 RHEA-2021:3483 python3-networking-ovn-metadata-agent 7.4.2-2.20210601204825.el8ost.11 RHEA-2021:3483 python3-networking-ovn-migration-tool 7.4.2-2.20210601204825.el8ost.11 RHEA-2021:3483 python3-networking-sfc 9.0.2-2.20210528094937.2f75b30.el8ost.1 RHEA-2021:3483 python3-networking-vmware-nsx 15.1.1-2.20210607154811.51d82e5.el8ost.1 RHEA-2021:3483 python3-networkx 1.11-20.el8ost.1 RHEA-2021:3483 python3-networkx-core 1.11-20.el8ost.1 RHEA-2021:3483 python3-neutron 15.3.5-2.20210608154813.el8ost.3 RHSA-2021:3488 python3-neutron-dynamic-routing 15.0.1-2.20210527193223.56de1c4.el8ost.1 RHEA-2021:3483 python3-neutron-lib 1.29.1-2.20210528014406.4ef4b71.el8ost.1 RHEA-2021:3483 python3-neutron-lib-tests 1.29.1-2.20210528014406.4ef4b71.el8ost.1 RHEA-2021:3483 python3-neutron-tests-tempest 0.9.0-2.20210528123644.el8ost.2 RHEA-2021:3483 python3-neutronclient 6.14.1-2.20210528065959.a09e824.el8ost.1 RHEA-2021:3483 python3-nova 20.6.2-2.20210607104828.el8ost.4 RHEA-2021:3483 python3-novaclient 15.1.1-2.20210528065428.79959ab.el8ost.1 RHEA-2021:3483 python3-novajoin 1.3.0-2.20210527183309.265146e.el8ost.1 RHEA-2021:3483 python3-novajoin-tests-tempest 0.0.4-2.20210527210838.b2e5485.el8ost.1 RHEA-2021:3483 python3-numpy 1.17.0-7.el8ost.2 RHEA-2021:3483 python3-numpy-f2py 1.17.0-7.el8ost.2 RHEA-2021:3483 python3-octavia 5.1.2-2.20210607084833.a686bc1.el8ost.1 RHEA-2021:3483 python3-octavia-lib 1.4.0-2.20210528021523.cec8b19.el8ost.1 RHEA-2021:3483 python3-octavia-tests-tempest 1.4.1-2.20210611094817.f7718ef.el8ost.1 RHEA-2021:3483 python3-octavia-tests-tempest-golang 1.4.1-2.20210611094817.f7718ef.el8ost.1 RHEA-2021:3483 python3-octaviaclient 1.10.1-2.20210528070637.b5397ea.el8ost.1 RHEA-2021:3483 python3-openshift 0.8.1-2.el8ost RHEA-2021:3483 python3-openstackclient 4.0.2-2.20210528091917.54bf2c0.el8ost.1 RHEA-2021:3483 python3-openstacksdk 0.36.5-2.20210528093819.feda828.el8ost.1 RHEA-2021:3483 python3-os-brick 2.10.7-2.20210528134947.el8ost.4 RHEA-2021:3483 python3-os-client-config 1.33.0-2.20210527235743.d0eea17.el8ost.1 RHEA-2021:3483 python3-os-ken 0.4.1-2.20210527163303.8f7851a.el8ost.1 RHEA-2021:3483 python3-os-resource-classes 0.5.0-2.20210527171515.0dd643b.el8ost.1 RHEA-2021:3483 python3-os-service-types 1.7.0-2.20210527190446.0b2f473.el8ost.1 RHEA-2021:3483 python3-os-testr 1.1.0-2.20210527154957.414bbf6.el8ost.1 RHEA-2021:3483 
python3-os-traits 0.16.0-2.20210527163401.5a477b8.el8ost.1 RHEA-2021:3483 python3-os-vif 1.17.0-2.20210602134810.3a08cc4.el8ost.1 RHEA-2021:3483 python3-os-win 4.3.3-2.20210528072552.3bdedd9.el8ost.1 RHEA-2021:3483 python3-os-xenapi 0.3.4-2.20210528005031.12c68d0.el8ost.1 RHEA-2021:3483 python3-osc-lib 1.14.1-2.20210527161058.a0d9746.el8ost.1 RHEA-2021:3483 python3-osc-placement 1.7.0-2.20210527183422.8bbca01.el8ost.1 RHEA-2021:3483 python3-oslo-cache 1.37.1-2.20210528100035.3e30378.el8ost.1 RHEA-2021:3483 python3-oslo-concurrency 3.30.1-2.20210528084908.f4d2dd8.el8ost.1 RHEA-2021:3483 python3-oslo-config 6.11.3-2.20210528084814.9b1ccea.el8ost.1 RHEA-2021:3483 python3-oslo-context 2.23.1-2.20210528064426.ab17aef.el8ost.1 RHEA-2021:3483 python3-oslo-db 5.0.2-2.20210527233747.fb40cdb.el8ost.1 RHEA-2021:3483 python3-oslo-i18n 3.24.0-2.20210527231638.91b39bb.el8ost.1 RHEA-2021:3483 python3-oslo-log 3.44.3-2.20210528064856.e19c407.el8ost.1 RHEA-2021:3483 python3-oslo-messaging 10.2.4-2.20210528105828.82281a0.el8ost.1 RHEA-2021:3483 python3-oslo-middleware 3.38.1-2.20210527231747.9bae80e.el8ost.1 RHEA-2021:3483 python3-oslo-policy 2.3.4-2.20210528073113.5904564.el8ost.1 RHEA-2021:3483 python3-oslo-privsep 1.33.5-2.20210528082906.ced0e7b.el8ost.1 RHEA-2021:3483 python3-oslo-reports 1.30.0-2.20210528001657.cf35fec.el8ost.1 RHEA-2021:3483 python3-oslo-rootwrap 5.16.1-2.20210528010238.c6babc7.el8ost.1 RHEA-2021:3483 python3-oslo-serialization 2.29.3-2.20210528100828.a9c4bfa.el8ost.1 RHEA-2021:3483 python3-oslo-service 1.40.2-2.20210527232750.a7621c8.el8ost.1 RHEA-2021:3483 python3-oslo-upgradecheck 0.3.2-2.20210527161100.e1df576.el8ost.1 RHEA-2021:3483 python3-oslo-utils 3.41.6-2.20210528071646.f4deaad.el8ost.1 RHEA-2021:3483 python3-oslo-versionedobjects 1.36.1-2.20210528005021.14ee7e0.el8ost.1 RHEA-2021:3483 python3-oslo-vmware 2.34.1-2.20210528002910.c592465.el8ost.1 RHEA-2021:3483 python3-oslotest 3.8.1-2.20210528063324.7ad16de.el8ost.1 RHEA-2021:3483 python3-osprofiler 2.8.2-2.20210528002707.d431c7a.el8ost.1 RHEA-2021:3483 python3-ovirt-engine-sdk4 4.2.9-5.el8ost RHEA-2021:3483 python3-ovsdbapp 0.17.5-2.20210528083704.11ca358.el8ost.1 RHEA-2021:3483 python3-panko 7.0.1-0.20191017041323.9b551e7.el8ost.1 RHEA-2021:3483 python3-pankoclient 0.5.0-0.20191010210259.572aee9.el8ost.1 RHEA-2021:3483 python3-paramiko 2.4.2-7.el8ost.1 RHEA-2021:3483 python3-passlib 1.7.0-10.el8ost.1 RHEA-2021:3483 python3-paste 2.0.3-8.el8ost RHEA-2021:3483 python3-paste-deploy 2.0.1-5.el8ost.1 RHEA-2021:3483 python3-patrole-tests-tempest 0.7.0-2.20210527184205.a5068ba.el8ost.1 RHEA-2021:3483 python3-paunch 5.5.1-2.20210527204730.9b6bef4.el8ost.1 RHEA-2021:3483 python3-paunch-tests 5.5.1-2.20210527204730.9b6bef4.el8ost.1 RHEA-2021:3483 python3-pbr 5.4.3-7.el8ost.1 RHEA-2021:3483 python3-pecan 1.3.2-8.el8ost.1 RHEA-2021:3483 python3-pexpect 4.6-7.el8ost.1 RHEA-2021:3483 python3-pint 0.9-6.el8ost.1 RHEA-2021:3483 python3-placement 2.0.1-2.20210527201546.ff55034.el8ost.1 RHEA-2021:3483 python3-pluggy 0.8.1-7.el8ost.1 RHEA-2021:3483 python3-posix_ipc 0.9.8-25.el8ost.1 RHEA-2021:3483 python3-proliantutils 2.9.1-2.20210527164517.28291c6.el8ost.1 RHEA-2021:3483 python3-prometheus_client 0.6.0-2.el8ost RHEA-2021:3483 python3-protobuf 3.6.1-5.el8ost.1 RHEA-2021:3483 python3-psutil 5.6.3-3.el8ost RHEA-2021:3483 python3-pyasn1 0.4.6-3.el8ost.2 RHEA-2021:3483 python3-pyasn1-modules 0.4.6-3.el8ost.2 RHEA-2021:3483 python3-pycadf 2.10.0-2.20210528000650.d113c15.el8ost.1 RHEA-2021:3483 python3-pycodestyle 2.4.0-8.1.el8ost.1 
RHEA-2021:3483 python3-pyeclib 1.6.0-7.el8ost.1 RHEA-2021:3483 python3-pyflakes 2.0.0-12.el8ost.1 RHEA-2021:3483 python3-pyghmi 1.0.22-8.el8ost.1 RHEA-2021:3483 python3-pymemcache 3.4.0-1.el8ost.1 RHEA-2021:3483 python3-pynacl 1.3.0-2.el8ost RHEA-2021:3483 python3-pyngus 2.3.0-2.el8ost RHEA-2021:3483 python3-pyparsing 2.4.2-1.el8ost.1 RHEA-2021:3483 python3-pyrabbit2 1.0.6-3.el8ost RHEA-2021:3483 python3-pyroute2 0.5.6-7.el8ost.1 RHEA-2021:3483 python3-pysaml2 4.6.5-6.el8ost.1 RHEA-2021:3483 python3-pysendfile 2.0.1-20.el8ost.1 RHEA-2021:3483 python3-pysnmp 4.4.8-7.el8ost.1 RHEA-2021:3483 python3-pystache 0.5.3-8.el8ost.1 RHEA-2021:3483 python3-pytest 3.9.1-7.el8ost.1 RHEA-2021:3483 python3-pytimeparse 1.1.5-7.1.el8ost.1 RHEA-2021:3483 python3-pyxattr 0.5.3-20.el8ost RHEA-2021:3483 python3-qpid-proton 0.32.0-2.el8 RHEA-2021:3483 python3-rcssmin 1.0.6-9.el8ost.1 RHEA-2021:3483 python3-redis 3.3.8-6.el8ost.1 RHEA-2021:3483 python3-repoze-lru 0.4-14.el8ost.1 RHEA-2021:3483 python3-requests-kerberos 0.8.0-9.el8ost.1 RHEA-2021:3483 python3-requestsexceptions 1.4.0-2.20210527160003.d7ac0ff.el8ost.1 RHEA-2021:3483 python3-retrying 1.2.3-10.el8ost.1 RHEA-2021:3483 python3-rfc3986 1.2.0-11.el8ost.1 RHEA-2021:3483 python3-rhosp-openvswitch 2.15-4.el8ost.1 RHEA-2021:3483 python3-rjsmin 1.0.12-10.el8ost.1 RHEA-2021:3483 python3-routes 2.4.1-17.el8ost.1 RHEA-2021:3483 python3-rsa 3.4.2-14.el8ost.1 RHEA-2021:3483 python3-rsd-lib 1.1.0-2.20210527184430.6e1ba65.el8ost.1 RHEA-2021:3483 python3-rsdclient 0.2.0-2.20210527185208.c46c7ac.el8ost.1 RHEA-2021:3483 python3-ruamel-yaml 0.15.41-5.el8ost RHEA-2021:3483 python3-s3transfer 0.2.0-1.el8ost.1 RHEA-2021:3483 python3-saharaclient 2.3.0-2.20210528015515.3107b45.el8ost.1 RHEA-2021:3483 python3-scciclient 0.9.1-2.20210527215016.e66d50c.el8ost.1 RHEA-2021:3483 python3-scrypt 0.8.0-9.el8ost.1 RHEA-2021:3483 python3-scss 1.3.7-7.el8ost.1 RHEA-2021:3483 python3-setproctitle 1.1.10-21.el8ost.1 RHEA-2021:3483 python3-shade 1.32.0-2.20210528034851.47fe056.el8ost.1 RHEA-2021:3483 python3-simplegeneric 0.8-13.el8ost.1 RHEA-2021:3483 python3-simplejson 3.16.0-8.el8ost.1 RHEA-2021:3483 python3-six 1.12.0-2.el8ost RHEA-2021:3483 python3-smmap 2.0.3-10.el8ost.1 RHEA-2021:3483 python3-snappy 0.5-15.1.el8ost.1 RHEA-2021:3483 python3-sqlalchemy-collectd 0.0.6-2.el8ost RHEA-2021:3483 python3-sqlalchemy-utils 0.34.2-7.el8ost.1 RHEA-2021:3483 python3-sqlparse 0.2.2-6.2.el8ost RHEA-2021:3483 python3-statsd 3.2.1-11.el8ost.1 RHEA-2021:3483 python3-stestr 2.4.0-1.el8ost RHEA-2021:3483 python3-stevedore 1.31.0-2.20210527225837.6817543.el8ost.1 RHEA-2021:3483 python3-string_utils 0.6.0-5.el8ost RHEA-2021:3483 python3-subunit 1.4.0-6.el8ost.1 RHEA-2021:3483 python3-sushy 2.0.5-2.20210527205913.40df70a.el8ost.1 RHEA-2021:3483 python3-sushy-oem-idrac 0.0.3-2.20210607085843.6478da8.el8ost.1 RHEA-2021:3483 python3-swift 2.23.3-2.20210607090143.81d845f.el8ost.1 RHEA-2021:3483 python3-swiftclient 3.8.1-2.20210527234845.72b90fe.el8ost.1 RHEA-2021:3483 python3-sysv_ipc 0.7.0-11.el8ost.1 RHEA-2021:3483 python3-tap-as-a-service 6.0.1-2.20210527191458.f2b0274.el8ost.1 RHEA-2021:3483 python3-taskflow 3.7.1-2.20210528011244.f0eae2c.el8ost.1 RHEA-2021:3483 python3-telemetry-tests-tempest 0.4.0-2.20210527174508.d60e6e2.el8ost.1 RHEA-2021:3483 python3-tempest 26.1.0-2.20210531080657.271f820.el8ost.1 RHEA-2021:3483 python3-tempest-tests 26.1.0-2.20210531080657.271f820.el8ost.1 RHEA-2021:3483 python3-tempestconf 3.2.1-2.20210527211920.f88961e.el8ost.1 RHEA-2021:3483 python3-tempita 0.5.1-21.el8ost 
RHEA-2021:3483 python3-tenacity 5.1.1-8.el8ost.1 RHEA-2021:3483 python3-testrepository 0.0.20-25.el8ost.1 RHEA-2021:3483 python3-testscenarios 0.5.0-14.el8ost.1 RHEA-2021:3483 python3-testtools 2.3.0-13.el8ost.1 RHEA-2021:3483 python3-tinyrpc 0.5-9.20170523git1f38ac.el8ost.1 RHEA-2021:3483 python3-tooz 1.66.3-2.20210527191208.13a6dff.el8ost.1 RHEA-2021:3483 python3-traceback2 1.4.0-8.el8ost.1 RHEA-2021:3483 python3-tripleo-common 11.6.1-2.20210603180856.el8ost.3 RHEA-2021:3483 python3-tripleo-common-tests-tempest 0.0.1-0.20200427153420.5bbfb13.el8ost RHEA-2021:3483 python3-tripleoclient 12.5.1-2.20210603180733.95feb7c.el8ost.1 RHEA-2021:3483 python3-tripleoclient-heat-installer 12.5.1-2.20210603180733.95feb7c.el8ost.1 RHEA-2021:3483 python3-trollius 2.1-19.el8ost.1 RHEA-2021:3483 python3-troveclient 3.0.1-2.20210528080751.564edb7.el8ost.1 RHEA-2021:3483 python3-twisted 16.4.1-17.el8ost.1 RHEA-2021:3483 python3-txaio 18.8.1-6.el8ost.1 RHEA-2021:3483 python3-ujson 2.0.3-2.el8ost.1 RHEA-2021:3483 python3-unittest2 1.1.0-23.el8ost.1 RHEA-2021:3483 python3-urllib-gssapi 1.0.1-11.el8ost RHEA-2021:3483 python3-validations-libs 1.1.1-2.20210607091343.04e84c8.el8ost.1 RHEA-2021:3483 python3-versiontools 1.9.1-14.el8ost.1 RHEA-2021:3483 python3-vine 1.3.0-9.el8ost.1 RHEA-2021:3483 python3-vmware-nsxlib 15.1.1-2.20210528104032.3f15e99.el8ost.1 RHEA-2021:3483 python3-voluptuous 0.11.7-7.el8ost.1 RHEA-2021:3483 python3-waitress 1.4.2-2.el8ost RHEA-2021:3483 python3-warlock 1.3.0-14.el8ost.1 RHEA-2021:3483 python3-webob 1.8.5-6.el8ost RHEA-2021:3483 python3-websocket-client 0.54.0-2.el8ost RHEA-2021:3483 python3-websockify 0.9.0-1.el8ost.1 RHEA-2021:3483 python3-webtest 2.0.33-5.el8ost RHEA-2021:3483 python3-werkzeug 0.14.1-10.el8ost.1 RHEA-2021:3483 python3-wrapt 1.11.2-5.el8ost RHEA-2021:3483 python3-wsaccel 0.6.2-15.el8ost.1 RHEA-2021:3483 python3-wsgi_intercept 1.2.2-7.el8ost.1 RHEA-2021:3483 python3-wsme 0.9.4-2.20210528002015.bff9624.el8ost.1 RHEA-2021:3483 python3-yappi 1.0-7.el8ost RHEA-2021:3483 python3-yaql 1.1.3-8.el8ost.1 RHEA-2021:3483 python3-zake 0.2.2-19.el8ost.1 RHEA-2021:3483 python3-zaqar-tests-tempest 0.3.0-2.20210527190208.f6211b4.el8ost.1 RHEA-2021:3483 python3-zaqarclient 1.12.0-2.20210528010027.9038bf6.el8ost.1 RHEA-2021:3483 python3-zeroconf 0.19.1-6.el8ost RHEA-2021:3483 python3-zipp 0.5.1-3.el8ost RHEA-2021:3483 python3-zope-event 4.2.0-14.2.el8ost RHEA-2021:3483 python3-zope-interface 4.4.3-3.el8ost RHEA-2021:3483 qpid-dispatch-router 1.8.0-2.el8 RHEA-2021:3483 qpid-dispatch-tools 1.8.0-2.el8 RHEA-2021:3483 qpid-proton-c 0.32.0-2.el8 RHEA-2021:3483 qpid-proton-c-devel 0.32.0-2.el8 RHEA-2021:3483 rabbitmq-server 3.8.16-2.el8ost.1 RHEA-2021:3483 rhosp-director-images 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-all 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-base 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-ipa-ppc64le 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-ipa-x86_64 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-metadata 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-minimal 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-ppc64le 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-x86_64 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-network-scripts-openvswitch 2.15-4.el8ost.1 RHEA-2021:3483 rhosp-openvswitch 2.15-4.el8ost.1 RHEA-2021:3483 rhosp-ovn-2021-4.el8ost.1.noarch.rpm 2.15-4.el8ost.1 RHEA-2021:3483 rhosp-ovn-central-2021-4.el8ost.1.noarch.rpm 2.15-4.el8ost.1 
RHEA-2021:3483 rhosp-ovn-host-2021-4.el8ost.1.noarch.rpm 2.15-4.el8ost.1 RHEA-2021:3483 rhosp-ovn-vtep-2021-4.el8ost.1.noarch.rpm 2.15-4.el8ost.1 RHEA-2021:3483 rhosp-release 16.2.0-3.el8ost.1 RHEA-2021:3483 roboto-fontface-common 0.5.0.0-18.el8ost.1 RHEA-2021:3483 roboto-fontface-fonts 0.5.0.0-18.el8ost.1 RHEA-2021:3483 ruby-augeas 0.5.0-8.el8ost.1 RHEA-2021:3483 ruby-facter 3.9.3-14.el8ost.1 RHEA-2021:3483 ruby-shadow 2.5.0-7.el8ost.1 RHEA-2021:3483 rubygem-pathspec 0.2.1-10.el8ost RHEA-2021:3483 rubygem-rgen 0.6.6-7.1.el8ost.1 RHEA-2021:3483 subunit-filters 1.4.0-6.el8ost.1 RHEA-2021:3483 sysbench 0.4.12-19.el8ost.2 RHEA-2021:3483 tripleo-ansible 0.7.1-2.20210603175840.el8ost.8 RHEA-2021:3483 validations-common 1.1.2-2.20210611010116.el8ost.2 RHEA-2021:3483 web-assets-filesystem 5-12.el8ost.1 RHEA-2021:3483 web-assets-httpd 5-12.el8ost.1 RHEA-2021:3483 xstatic-angular-bootstrap-common 2.2.0.0-11.el8ost.1 RHEA-2021:3483 xstatic-angular-fileupload-common 12.0.4.0-15.el8ost.1 RHEA-2021:3483 xstatic-angular-gettext-common 2.3.8.0-7.el8ost.1 RHEA-2021:3483 xstatic-angular-lrdragndrop-common 1.0.2.2-17.el8ost.1 RHEA-2021:3483 xstatic-angular-schema-form-common 0.8.13.0-6.el8ost.1 RHEA-2021:3483 xstatic-angular-uuid-common 0.0.4.0-10.el8ost.1 RHEA-2021:3483 xstatic-angular-vis-common 4.16.0.0-7.el8ost.1 RHEA-2021:3483 xstatic-bootstrap-datepicker-common 1.3.1.0-17.el8ost.1 RHEA-2021:3483 xstatic-bootstrap-scss-common 3.3.7.1-11.el8ost.1 RHEA-2021:3483 xstatic-d3-common 3.5.17.0-11.el8ost.1 RHEA-2021:3483 xstatic-filesaver-common 1.3.2.0-7.el8ost.1 RHEA-2021:3483 xstatic-hogan-common 2.0.0.2-18.el8ost.1 RHEA-2021:3483 xstatic-jasmine-common 2.4.1.1-10.el8ost.1 RHEA-2021:3483 xstatic-jquery-migrate-common 1.2.1.1-18.el8ost.1 RHEA-2021:3483 xstatic-jquery-quicksearch-common 2.0.3.1-18.el8ost.1 RHEA-2021:3483 xstatic-jquery-tablesorter-common 2.14.5.1-18.el8ost.1 RHEA-2021:3483 xstatic-jquery-ui-common 1.12.0.1-10.el8ost.1 RHEA-2021:3483 xstatic-js-yaml-common 3.8.1.0-8.el8ost.1 RHEA-2021:3483 xstatic-jsencrypt-common 2.3.1.1-10.el8ost.1 RHEA-2021:3483 xstatic-json2yaml-common 0.1.1.0-7.el8ost.1 RHEA-2021:3483 xstatic-objectpath-common 1.2.1.0-7.el8ost.1 RHEA-2021:3483 xstatic-rickshaw-common 1.5.0.0-20.el8ost.1 RHEA-2021:3483 xstatic-smart-table-common 1.4.13.2-10.el8ost.1 RHEA-2021:3483 xstatic-spin-common 1.2.5.2-19.el8ost.1 RHEA-2021:3483 xstatic-termjs-common 0.0.7.0-10.el8ost.1 RHEA-2021:3483 xstatic-tv4-common 1.2.7.0-6.el8ost.1 RHEA-2021:3483 yaml-cpp 0.6.1-13.el8ost.1 RHEA-2021:3483 | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/package_manifest/openstack-16.2-for-rhel-8-x86_64-rpms_2021-09-15 |
20.2. Installing in an LPAR | 20.2. Installing in an LPAR When installing in a logical partition (LPAR), you can boot from: an FTP server the DVD drive of the HMC or SE a DASD or an FCP-attached SCSI drive prepared with the zipl boot loader an FCP-attached SCSI DVD drive Perform these common steps first: Log in on the IBM System z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. The SYSPROG user is recommended. Select Images , then select the LPAR to which you wish to install. Use the arrows in the frame on the right side to navigate to the CPC Recovery menu. Double-click Operating System Messages to show the text console on which Linux boot messages will appear and potentially user input will be required. Refer to the chapter on booting Linux in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 and the Hardware Management Console Operations Guide , order number [ SC28-6857 ], for details. Continue with the procedure for your installation source. 20.2.1. Using an FTP Server Double-click Load from CD-ROM, DVD, or Server . In the dialog box that follows, select FTP Source , and enter the following information: Host Computer: Hostname or IP address of the FTP server you wish to install from (for example, ftp.redhat.com) User ID: Your user name on the FTP server (or anonymous) Password: Your password (use your email address if you are logging in as anonymous) Account (optional): Leave this field empty File location (optional): Directory on the FTP server holding Red Hat Enterprise Linux for System z (for example, /rhel/s390x/) Click Continue . In the dialog that follows, keep the default selection of generic.ins and click Continue . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot-Installing_in_an_LPAR |
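Before pointing the Load from CD-ROM, DVD, or Server dialog at an FTP server, it can help to confirm from any workstation that the installation tree is reachable anonymously and actually contains the generic.ins file selected in the final step. A minimal check with curl, assuming an anonymous FTP server and the /rhel/s390x/ directory used in the example above (both are placeholders):

```
# List the installation tree over anonymous FTP (file names only)
curl --user anonymous: --list-only ftp://ftp.example.com/rhel/s390x/

# The listing should include the generic.ins file that is selected
# in the final step of this procedure.
```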
Chapter 2. Migrating Camel Routes from Fuse 7 to Red Hat build of Apache Camel for Quarkus | Chapter 2. Migrating Camel Routes from Fuse 7 to Red Hat build of Apache Camel for Quarkus Note You can define Camel routes in Red Hat build of Apache Camel for Quarkus applications using Java DSL, XML IO DSL, or YAML. 2.1. Java DSL route migration example To migrate a Java DSL route definition from your Fuse application to CEQ, you can copy your existing route definition directly to your Red Hat build of Apache Camel for Quarkus application and add the necessary dependencies to your Red Hat build of Apache Camel for Quarkus pom.xml file. In this example, we will migrate a content-based route definition from a Fuse 7 application to a new CEQ application by copying the Java DSL route to a file named Routes.java in your CEQ application. Procedure Using the code.quarkus.redhat.com website, select the extensions required for this example: camel-quarkus-file camel-quarkus-xpath Navigate to the directory where you extracted the generated project files from the step: USD cd <directory_name> Create a file named Routes.java in the src/main/java/org/acme/ subfolder. Add the route definition from your Fuse application to the Routes.java , similar to the following example: package org.acme; import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { // Add your Java DSL route definition here public void configure() { from("file:work/cbr/input") .log("Receiving order USD{file:name}") .choice() .when().xpath("//order/customer/country[text() = 'UK']") .log("Sending order USD{file:name} to the UK") .to("file:work/cbr/output/uk") .when().xpath("//order/customer/country[text() = 'US']") .log("Sending order USD{file:name} to the US") .to("file:work/cbr/output/uk") .otherwise() .log("Sending order USD{file:name} to another country") .to("file:work/cbr/output/others"); } } Compile your CEQ application. mvn clean compile quarkus:dev Note This command compiles the project, starts your application, and lets the Quarkus tooling watch for changes in your workspace. Any modifications in your project will automatically take effect in the running application. 2.2. Blueprint XML DSL route migration To migrate a Blueprint XML route definition from your Fuse application to CEQ, use the camel-quarkus-xml-io-dsl extension and copy your Fuse application route definition directly to your Red Hat build of Apache Camel for Quarkus application. You will then need to add the necessary dependencies to the Red Hat build of Apache Camel for Quarkus pom.xml file and update your Red Hat build of Apache Camel for Quarkus configuration in the application.properties file. Note Red Hat build of Apache Camel for Quarkus supports Camel version 4, but Fuse 7 supports Camel version 2. For more information relating to upgrading Camel when you migrate your Red Hat Fuse 7 application to CEQ, see: Migrating to Apache Camel 3 Migrating to Apache Camel 4 For more information about using beans in Camel Quarkus, see the CDI and the Camel Bean Component section in the Developing Applications with Red Hat build of Apache Camel for Quarkus guide. 2.2.1. XML-IO-DSL limitations You can use the camel-quarkus-xml-io-dsl extension to assist with migrating a Blueprint XML route definition to CEQ. 
The camel-quarkus-xml-io-dsl extension only supports the following <camelContext> sub-elements: routeTemplates templatedRoutes rests routes routeConfigurations Note As Blueprint XML supports other bean definitions that are not supported by the camel-quarkus-xml-io-dsl extension, you may need to rewrite other bean definitions that are included in your Blueprint XML route definition. You must define every element (XML IO DSL) in a separate file. For example, this is a simplified example of a Blueprint XML route definition: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <restConfiguration contextPath="/camel" /> <rest path="/books"> <get uri="/"> <to ..../> </get> </rest> <route> <from ..../> </route> </camelContext> </blueprint> You can migrate this Blueprint XML route definition to CEQ using XML IO DSL as defined in the following files: src/main/resources/routes/camel-rests.xml <rests xmlns="http://camel.apache.org/schema/spring"> <rest path="/books"> <get path="/"> <to ..../> </get> </rest> </rests> src/main/resources/routes/camel-routes.xml <routes xmlns="http://camel.apache.org/schema/spring"> <route> <from ..../> </route> </routes> You must use Java DSL to define other elements which are not supported, such as <restConfiguration> . For example, using a route builder defined in a camel-rests.xml file as follows: src/main/resources/routes/camel-rests.xml import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { public void configure() { restConfiguration() .contextPath("/camel"); } } 2.2.2. Blueprint XML DSL route migration example Note For more information about using the XML IO DSL extension, see the XML IO DSL documentation in the Red Hat build of Apache Camel for Quarkus Reference. In this example, you are migrating a content-based route definition from a Fuse application to a new CEQ application by copying the Blueprint XML route definition to a file named camel-routes.xml in your CEQ application. Procedure Using the code.quarkus.redhat.com website, select the following extensions for this example: camel-quarkus-xml-io-dsl camel-quarkus-file camel-quarkus-xpath Select Generate your application to confirm your choices and display the overlay screen with the download link for the archive that contains your generated project. Select Download the ZIP to save the archive with the generated project files to your machine. Extract the contents of the archive. Navigate to the directory where you extracted the generated project files from the step: USD cd <directory_name> Create a file named camel-routes.xml in the src/main/resources/routes/ directory. 
Copy the <route> element and sub-elements from the following blueprint-example.xml example to the camel-routes.xml file: blueprint-example.xml <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> <camelContext id="cbr-example-context" xmlns="http://camel.apache.org/schema/blueprint"> <route id="cbr-route"> <from id="_from1" uri="file:work/cbr/input"/> <log id="_log1" message="Receiving order USD{file:name}"/> <choice id="_choice1"> <when id="_when1"> <xpath id="_xpath1">/order/customer/country = 'UK'</xpath> <log id="_log2" message="Sending order USD{file:name} to the UK"/> <to id="_to1" uri="file:work/cbr/output/uk"/> </when> <when id="_when2"> <xpath id="_xpath2">/order/customer/country = 'US'</xpath> <log id="_log3" message="Sending order USD{file:name} to the US"/> <to id="_to2" uri="file:work/cbr/output/us"/> </when> <otherwise id="_otherwise1"> <log id="_log4" message="Sending order USD{file:name} to another country"/> <to id="_to3" uri="file:work/cbr/output/others"/> </otherwise> </choice> <log id="_log5" message="Done processing USD{file:name}"/> </route> </camelContext> </blueprint> camel-routes.xml <route id="cbr-route"> <from id="_from1" uri="file:work/cbr/input"/> <log id="_log1" message="Receiving order USD{file:name}"/> <choice id="_choice1"> <when id="_when1"> <xpath id="_xpath1">/order/customer/country = 'UK'</xpath> <log id="_log2" message="Sending order USD{file:name} to the UK"/> <to id="_to1" uri="file:work/cbr/output/uk"/> </when> <when id="_when2"> <xpath id="_xpath2">/order/customer/country = 'US'</xpath> <log id="_log3" message="Sending order USD{file:name} to the US"/> <to id="_to2" uri="file:work/cbr/output/us"/> </when> <otherwise id="_otherwise1"> <log id="_log4" message="Sending order USD{file:name} to another country"/> <to id="_to3" uri="file:work/cbr/output/others"/> </otherwise> </choice> <log id="_log5" message="Done processing USD{file:name}"/> </route> Modify application.properties # Camel # camel.context.name = camel-quarkus-xml-io-dsl-example camel.main.routes-include-pattern = file:src/main/resources/routes/camel-routes.xml Compile your CEQ application. mvn clean compile quarkus:dev Note This command compiles the project, starts your application, and lets the Quarkus tooling watch for changes in your workspace. Any modifications in your project will automatically take effect in the running application. | [
"cd <directory_name>",
"package org.acme; import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { // Add your Java DSL route definition here public void configure() { from(\"file:work/cbr/input\") .log(\"Receiving order USD{file:name}\") .choice() .when().xpath(\"//order/customer/country[text() = 'UK']\") .log(\"Sending order USD{file:name} to the UK\") .to(\"file:work/cbr/output/uk\") .when().xpath(\"//order/customer/country[text() = 'US']\") .log(\"Sending order USD{file:name} to the US\") .to(\"file:work/cbr/output/uk\") .otherwise() .log(\"Sending order USD{file:name} to another country\") .to(\"file:work/cbr/output/others\"); } }",
"mvn clean compile quarkus:dev",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration contextPath=\"/camel\" /> <rest path=\"/books\"> <get uri=\"/\"> <to ..../> </get> </rest> <route> <from ..../> </route> </camelContext> </blueprint>",
"<rests xmlns=\"http://camel.apache.org/schema/spring\"> <rest path=\"/books\"> <get path=\"/\"> <to ..../> </get> </rest> </rests>",
"<routes xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from ..../> </route> </routes>",
"import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { public void configure() { restConfiguration() .contextPath(\"/camel\"); } }",
"cd <directory_name>",
"<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <camelContext id=\"cbr-example-context\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <route id=\"cbr-route\"> <from id=\"_from1\" uri=\"file:work/cbr/input\"/> <log id=\"_log1\" message=\"Receiving order USD{file:name}\"/> <choice id=\"_choice1\"> <when id=\"_when1\"> <xpath id=\"_xpath1\">/order/customer/country = 'UK'</xpath> <log id=\"_log2\" message=\"Sending order USD{file:name} to the UK\"/> <to id=\"_to1\" uri=\"file:work/cbr/output/uk\"/> </when> <when id=\"_when2\"> <xpath id=\"_xpath2\">/order/customer/country = 'US'</xpath> <log id=\"_log3\" message=\"Sending order USD{file:name} to the US\"/> <to id=\"_to2\" uri=\"file:work/cbr/output/us\"/> </when> <otherwise id=\"_otherwise1\"> <log id=\"_log4\" message=\"Sending order USD{file:name} to another country\"/> <to id=\"_to3\" uri=\"file:work/cbr/output/others\"/> </otherwise> </choice> <log id=\"_log5\" message=\"Done processing USD{file:name}\"/> </route> </camelContext> </blueprint>",
"<route id=\"cbr-route\"> <from id=\"_from1\" uri=\"file:work/cbr/input\"/> <log id=\"_log1\" message=\"Receiving order USD{file:name}\"/> <choice id=\"_choice1\"> <when id=\"_when1\"> <xpath id=\"_xpath1\">/order/customer/country = 'UK'</xpath> <log id=\"_log2\" message=\"Sending order USD{file:name} to the UK\"/> <to id=\"_to1\" uri=\"file:work/cbr/output/uk\"/> </when> <when id=\"_when2\"> <xpath id=\"_xpath2\">/order/customer/country = 'US'</xpath> <log id=\"_log3\" message=\"Sending order USD{file:name} to the US\"/> <to id=\"_to2\" uri=\"file:work/cbr/output/us\"/> </when> <otherwise id=\"_otherwise1\"> <log id=\"_log4\" message=\"Sending order USD{file:name} to another country\"/> <to id=\"_to3\" uri=\"file:work/cbr/output/others\"/> </otherwise> </choice> <log id=\"_log5\" message=\"Done processing USD{file:name}\"/> </route>",
"Camel # camel.context.name = camel-quarkus-xml-io-dsl-example camel.main.routes-include-pattern = file:src/main/resources/routes/camel-routes.xml",
"mvn clean compile quarkus:dev"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/migrating_camel_routes_from_fuse_7_to_red_hat_build_of_apache_camel_for_quarkus |
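The migration procedures above assume the extensions are selected when the project is generated on code.quarkus.redhat.com. If you are instead adding them to an existing Maven project, the Quarkus Maven plugin can edit pom.xml for you; a sketch, assuming the project already uses the Quarkus Maven plugin and that the shorthand extension names shown in the procedures resolve to the full artifact coordinates:

```
# Extensions used by the Java DSL example
mvn quarkus:add-extension -Dextensions="camel-quarkus-file,camel-quarkus-xpath"

# Additional extension used by the Blueprint XML migration example
mvn quarkus:add-extension -Dextensions="camel-quarkus-xml-io-dsl"
```

Run mvn clean compile quarkus:dev again afterwards so the newly added extensions are picked up by the running application.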
9.11. Using Kerberos GSS-API with SASL | 9.11. Using Kerberos GSS-API with SASL Kerberos v5 must be deployed on the host for Directory Server to utilize the GSS-API mechanism for SASL authentication. GSS-API and Kerberos client libraries must be installed on the Directory Server host to take advantage of Kerberos services. 9.11.1. Authentication Mechanisms for SASL in Directory Server Directory Server supports the following SASL encryption mechanisms: PLAIN. PLAIN sends cleartext passwords for simple password-based authentication. EXTERNAL. EXTERNAL, as with TLS, performs certificate-based authentication. This method uses public keys for strong authentication. CRAM-MD5. CRAM-MD5 is a weak, simple challenge-response authentication method. It does not establish any security layer. Warning Red Hat recommends not using the insecure CRAM-MD5 mechanism. DIGEST-MD5. DIGEST-MD5 is a weak authentication method for LDAPv3 servers. Warning Red Hat recommends not using the insecure DIGEST-MD5 mechanism. Generic Security Services (GSS-API). Generic Security Services (GSS) is a security API that is the native way for UNIX-based operating systems to access and authenticate Kerberos services. GSS-API also supports session encryption, similar to TLS. This allows LDAP clients to authenticate with the server using Kerberos version 5 credentials (tickets) and to use network session encryption. For Directory Server to use GSS-API, Kerberos must be configured on the host machine. See Section 9.11, "Using Kerberos GSS-API with SASL" . Note GSS-API and, thus, Kerberos are only supported on platforms that have GSS-API support. To use GSS-API, it may be necessary to install the Kerberos client libraries; any required Kerberos libraries will be available through the operating system vendor. 9.11.2. About Kerberos in Directory Server On Red Hat Enterprise Linux, the supported Kerberos libraries are MIT Kerberos version 5. The concepts of Kerberos, as well as using and configuring Kerberos, are covered at the MIT Kerberos website, http://web.mit.edu/Kerberos/ . 9.11.2.1. About Principals and Realms A principal is a user or service in the Kerberos environment. A realm defines what Kerberos manages in terms of who can access what. The client, the KDC, and the host or service you want to access must use the same realm. Note Kerberos realms are only supported for GSS-API authentication and encryption, not for DIGEST-MD5. Realms are used by the server to associate the DN of the client in the following form, which looks like an LDAP DN: For example, Mike Connors in the engineering realm of the European division of example.com uses the following association to access a server in the US realm: Barbara Jensen, from the accounting realm of US.example.com , does not have to specify a realm to access a local server: If realms are supported by the mechanism and the default realm is not used to authenticate to the server, then the realm must be specified in the Kerberos principal. Otherwise, the realm can be omitted. Note Kerberos systems treat the Kerberos realm as the default realm; other systems default to the server. 9.11.2.2. About the KDC Server and Keytabs The Key Distribution Center (KDC) authenticates users and issues Ticket Granting Tickets (TGT) for them. This enables users to authenticate to Directory Server using GSS-API. To respond to Kerberos operations, Directory Server requires access to its keytab file. The keytab contains the cryptographic key that Directory Server uses to authenticate to other servers.
Directory Server uses the ldap service name in a Kerberos principal. For example: For details about creating the keytab, see your Kerberos documentation. Note You must create a Simple Authentication and Security Layer (SASL) mapping for the Directory Server Kerberos principal that maps to an existing entry Distinguished Name (DN). 9.11.3. Configuring SASL Authentication at Directory Server Startup SASL GSS-API authentication has to be activated in Directory Server so that Kerberos tickets can be used for authentication. This is done by supplying a system configuration file for the init scripts to use which identifies the variable to set the keytab file location. When the init script runs at Directory Server startup, SASL authentication is then immediately active. The default SASL configuration is stored in the /etc/sysconfig/dirsrv file. If there are multiple Directory Server instances and not all of them will use SASL authentication, then there can be instance-specific configuration files created in the /etc/sysconfig/ directory named dirsrv- instance . For example, dirsrv-example . The default dirsrv file can be used if there is a single instance on a host. To enable SASL authentication, uncomment the KRB5_KTNAME line in the /etc/sysconfig/dirsrv (or instance-specific) file, and set the keytab location for the KRB5_KTNAME variable. For example: | [
"uid= user_name /[ server_instance ],cn= realm ,cn= mechanism ,cn=auth",
"uid=mconnors/cn=Europe.example.com,cn=engineering,cn=gssapi,cn=auth",
"uid=bjensen,cn=accounting,cn=gssapi,cn=auth",
"ldap/ server.example.com @ EXAMPLE.COM",
"In order to use SASL/GSSAPI the directory server needs to know where to find its keytab file - uncomment the following line and set the path and filename appropriately KRB5_KTNAME= /etc/dirsrv/krb5.keytab"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Configuring_Kerberos |
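Section 9.11.2.2 leaves keytab creation to the Kerberos documentation; with MIT Kerberos, the supported implementation on Red Hat Enterprise Linux, the steps typically look like the following sketch. The principal, realm, and keytab path reuse the examples from this section; run the kadmin commands on (or against) your KDC and copy the resulting keytab to the Directory Server host if they are different machines:

```
# Create the Directory Server service principal and export its key
kadmin.local -q "addprinc -randkey ldap/server.example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/dirsrv/krb5.keytab ldap/server.example.com@EXAMPLE.COM"

# Verify the keytab and restrict access to it
# (adjust the ownership to the account the ns-slapd process runs as)
klist -kt /etc/dirsrv/krb5.keytab
chown dirsrv:dirsrv /etc/dirsrv/krb5.keytab
chmod 600 /etc/dirsrv/krb5.keytab
```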
Chapter 4. Environment variables | Chapter 4. Environment variables Red Hat Quay supports a limited number of environment variables for dynamic configuration. 4.1. Geo-replication The same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Table 4.1. Geo-replication configuration Variable Type Description QUAY_DISTRIBUTED_STORAGE_PREFERENCE String The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. 4.2. Database connection pooling Red Hat Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database. Database connection pooling is enabled by default, and each process that interacts with the database contains a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Red Hat Quay container. Under certain deployments and loads, this might require analysis to ensure that Red Hat Quay does not exceed the configured database's maximum connection count. Over time, the connection pools release idle connections. To release all connections immediately, Red Hat Quay requires a restart. For standalone Red Hat Quay deployments, database connection pooling can be toggled off when starting your deployment. For example: USD sudo podman run -d --rm -p 80:8080 -p 443:8443 \ --name=quay \ -v USDQUAY/config:/conf/stack:Z \ -v USDQUAY/storage:/datastorage:Z \ -e DB_CONNECTION_POOLING=false registry.redhat.io/quay/quay-rhel8:v3.12.1 For Red Hat Quay on OpenShift Container Platform, database connection pooling can be configured by modifying the QuayRegistry custom resource definition (CRD). For example: Example QuayRegistry CRD spec: components: - kind: quay managed: true overrides: env: - name: DB_CONNECTION_POOLING value: "false" Table 4.2. Database connection pooling configuration Variable Type Description DB_CONNECTION_POOLING String Whether to enable or disable database connection pooling. Defaults to true. Accepted values are "true" or "false" If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml option: config.yaml ... DB_CONNECTION_ARGS: max_connections: 10 ... 4.3. HTTP connection counts It is possible to specify the quantity of simultaneous HTTP connections using environment variables. These can be specified as a whole, or for a specific component. The default for each is 50 parallel connections per process. Table 4.3. HTTP connection counts configuration Variable Type Description WORKER_CONNECTION_COUNT Number Simultaneous HTTP connections Default: 50 WORKER_CONNECTION_COUNT_REGISTRY Number Simultaneous HTTP connections for registry Default: WORKER_CONNECTION_COUNT WORKER_CONNECTION_COUNT_WEB Number Simultaneous HTTP connections for web UI Default: WORKER_CONNECTION_COUNT WORKER_CONNECTION_COUNT_SECSCAN Number Simultaneous HTTP connections for Clair Default: WORKER_CONNECTION_COUNT 4.4. Worker count variables Table 4.4.
Worker count variables Variable Type Description WORKER_COUNT Number Generic override for number of processes WORKER_COUNT_REGISTRY Number Specifies the number of processes to handle Registry requests within the Quay container Values: Integer between 8 and 64 WORKER_COUNT_WEB Number Specifies the number of processes to handle UI/Web requests within the container Values: Integer between 2 and 32 WORKER_COUNT_SECSCAN Number Specifies the number of processes to handle Security Scanning (e.g. Clair) integration within the container Values: Integer. Because the Operator specifies 2 vCPUs for resource requests and limits, setting this value between 2 and 4 is safe. However, users can run more, for example, 16 , if warranted. 4.5. Debug variables The following debug variables are available on Red Hat Quay. Table 4.5. Debug configuration variables Variable Type Description DEBUGLOG Boolean Whether to enable or disable debug logs. USERS_DEBUG Integer. Either 0 or 1 . Used to debug LDAP operations in clear text, including passwords. Must be used with DEBUGLOG=TRUE . Important Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution. | [
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z -e DB_CONNECTION_POOLING=false registry.redhat.io/quay/quay-rhel8:v3.12.1",
"spec: components: - kind: quay managed: true overrides: env: - name: DB_CONNECTION_POOLING value: \"false\"",
"DB_CONNECTION_ARGS: max_connections: 10"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/configure_red_hat_quay/config-envar-intro |
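The chapter shows how DB_CONNECTION_POOLING is passed to a standalone deployment; the connection-count, worker-count, and debug variables from Tables 4.3, 4.4, and 4.5 are passed the same way. A sketch for a standalone deployment, reusing the podman invocation from Section 4.2 (the specific values are placeholders, not tuning recommendations):

```
sudo podman run -d --rm -p 80:8080 -p 443:8443 \
  --name=quay \
  -v $QUAY/config:/conf/stack:Z \
  -v $QUAY/storage:/datastorage:Z \
  -e WORKER_COUNT_REGISTRY=8 \
  -e WORKER_CONNECTION_COUNT_REGISTRY=50 \
  -e DEBUGLOG=false \
  registry.redhat.io/quay/quay-rhel8:v3.12.1
```

On OpenShift Container Platform, the same variables go into the overrides env list of the QuayRegistry resource, following the pattern shown for DB_CONNECTION_POOLING above.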
1.3. Available Services | 1.3. Available Services All Red Hat Enterprise Linux systems have some services already available to configure authentication for local users on local systems. These include: Authentication Setup The Authentication Configuration tool ( authconfig ) sets up different identity back ends and means of authentication (such as passwords, fingerprints, or smart cards) for the system. Identity Back End Setup The Security System Services Daemon (SSSD) sets up multiple identity providers (primarily LDAP-based directories such as Microsoft Active Directory or Red Hat Enterprise Linux IdM) which can then be used by both the local system and applications for users. Passwords and tickets are cached, allowing both offline authentication and single sign-on by reusing credentials. The realmd service is a command-line utility that allows you to configure an authentication back end, which is SSSD for IdM. The realmd service detects available IdM domains based on the DNS records, configures SSSD, and then joins the system as an account to a domain. Name Service Switch (NSS) is a mechanism for low-level system calls that return information about users, groups, or hosts. NSS determines what source, that is, which modules, should be used to obtain the required information. For example, user information can be located in traditional UNIX files, such as the /etc/passwd file, or in LDAP-based directories, while host addresses can be read from files, such as the /etc/hosts file, or the DNS records; NSS locates where the information is stored. Authentication Mechanisms Pluggable Authentication Modules (PAM) provide a system to set up authentication policies. An application using PAM for authentication loads different modules that control different aspects of authentication; which PAM module an application uses is based on how the application is configured. The available PAM modules include Kerberos, Winbind, or local UNIX file-based authentication. Other services and applications are also available, but these are common ones. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/default-options |
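For the realmd and SSSD pieces described above, the typical command sequence is short; a sketch, using a placeholder domain name:

```
# Discover what the DNS records advertise for the domain, then join it;
# realm join writes the SSSD configuration and enables the matching
# PAM and NSS integration as described above.
realm discover idm.example.com
realm join idm.example.com

# After the join, NSS resolves domain users alongside local files:
getent passwd admin@idm.example.com
```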
Chapter 5. OpenShift Data Foundation upgrade overview | Chapter 5. OpenShift Data Foundation upgrade overview As an operator bundle managed by the Operator Lifecycle Manager (OLM), OpenShift Data Foundation leverages its operators to perform high-level tasks of installing and upgrading the product through ClusterServiceVersion (CSV) CRs. 5.1. Upgrade Workflows OpenShift Data Foundation recognizes two types of upgrades: Z-stream release upgrades and Minor Version release upgrades. While the user interface workflows for these two upgrade paths are not quite the same, the resulting behaviors are fairly similar. The distinctions are as follows: For Z-stream releases, OCS will publish a new bundle in the redhat-operators CatalogSource . The OLM will detect this and create an InstallPlan for the new CSV to replace the existing CSV. The Subscription approval strategy, whether Automatic or Manual, will determine whether the OLM proceeds with reconciliation or waits for administrator approval. For Minor Version releases, OpenShift Container Storage will also publish a new bundle in the redhat-operators CatalogSource . The difference is that this bundle will be part of a new channel, and channel upgrades are not automatic. The administrator must explicitly select the new release channel. Once this is done, the OLM will detect this and create an InstallPlan for the new CSV to replace the existing CSV. Since the channel switch is a manual operation, OLM will automatically start the reconciliation. From this point onwards, the upgrade processes are identical. 5.2. ClusterServiceVersion Reconciliation When the OLM detects an approved InstallPlan , it begins the process of reconciling the CSVs. Broadly, it does this by updating the operator resources based on the new spec, verifying the new CSV installs correctly, then deleting the old CSV. The upgrade process will push updates to the operator Deployments, which will trigger the restart of the operator Pods using the images specified in the new CSV. Note While it is possible to make changes to a given CSV and have those changes propagate to the relevant resource, when upgrading to a new CSV all custom changes will be lost, as the new CSV will be created based on its unaltered spec. 5.3. Operator Reconciliation At this point, the reconciliation of the OpenShift Data Foundation operands proceeds as defined in the OpenShift Data Foundation installation overview . The operators will ensure that all relevant resources exist in their expected configurations as specified in the user-facing resources (for example, StorageCluster ). | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_upgrade_overview |
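In practice, the approval and channel-selection steps described above map onto a handful of oc commands. A sketch, assuming the openshift-storage namespace; the resource names are placeholders to be read from your own cluster:

```
# Inspect the Subscription and any pending InstallPlan created by OLM
oc get subscription,installplan -n openshift-storage

# With the Manual approval strategy, approve the pending InstallPlan
oc patch installplan <installplan_name> -n openshift-storage \
  --type merge -p '{"spec":{"approved":true}}'

# For a minor-version upgrade, explicitly select the new release channel
oc patch subscription <subscription_name> -n openshift-storage \
  --type merge -p '{"spec":{"channel":"<new_channel>"}}'
```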
Chapter 11. Intercepting Messages | Chapter 11. Intercepting Messages With AMQ Broker you can intercept packets entering or exiting the broker, allowing you to audit packets or filter messages. Interceptors can change the packets they intercept, which makes them powerful, but also potentially dangerous. You can develop interceptors to meet your business requirements. Interceptors are protocol specific and must implement the appropriate interface. Interceptors must implement the intercept() method, which returns a boolean value. If the value is true , the message packet continues onward. If false , the process is aborted, no other interceptors are called, and the message packet is not processed further. 11.1. Creating Interceptors You can create your own incoming and outgoing interceptors. All interceptors are protocol specific and are called for any packet entering or exiting the server respectively. This allows you to create interceptors to meet business requirements such as auditing packets. Interceptors can change the packets they intercept. This makes them powerful as well as potentially dangerous, so be sure to use them with caution. Interceptors and their dependencies must be placed in the Java classpath of the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure The following examples demonstrate how to create an interceptor that checks the size of each packet passed to it. Note that the examples implement a specific interface for each protocol. Implement the appropriate interface and override its intercept() method. If you are using the AMQP protocol, implement the org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor interface. package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This AMQPMessage has an acceptable size."); return true; } return false; } } If you are using Core Protocol, your interceptor must implement the org.apache.artemis.activemq.api.core.Interceptor interface. package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } If you are using the MQTT protocol, implement the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface. 
package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements MQTTInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println("This MqttMessage has an acceptable size."); return true; } return false; } } If you are using the STOMP protocol, implement the org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor interface. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements StompFrameInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This StompFrame has an acceptable size."); return true; } return false; } } 11.2. Configuring the Broker to Use Interceptors Once you have created an interceptor, you must configure the broker to use it. Prerequisites You must create an interceptor class and add it (and its dependencies) to the Java classpath of the broker before you can configure it for use by the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure Configure the broker to use an interceptor by adding configuration to <broker_instance_dir> /etc/broker.xml If your interceptor is intended for incoming messages, add its class-name to the list of remoting-incoming-interceptors . <configuration> <core> ... <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> ... </core> </configuration> If your interceptor is intended for outgoing messages, add its class-name to the list of remoting-outgoing-interceptors . <configuration> <core> ... <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration> Additional resources To learn how to configure interceptors in the AMQ Core Protocol JMS client, see Using message interceptors in the AMQ Core Protocol JMS documentation. | [
"package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This AMQPMessage has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This MqttMessage has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This StompFrame has an acceptable size.\"); return true; } return false; } }",
"<configuration> <core> <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> </core> </configuration>",
"<configuration> <core> <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/interceptors |
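The prerequisite above is that the compiled interceptor and its dependencies end up on the broker classpath. One way to get there is sketched below, assuming the Artemis JARs shipped with the broker installation (shown here under a hypothetical <broker_install_dir>/lib/ path) are used as the compile classpath; <broker_instance_dir> is the same placeholder used in the chapter above:

```
# Compile the interceptor against the broker's own JARs,
# package it, and place it on the instance classpath
javac -cp "<broker_install_dir>/lib/*" com/example/MyInterceptor.java
jar cf my-interceptor.jar com/example/MyInterceptor.class
cp my-interceptor.jar <broker_instance_dir>/lib/
```

Restart the broker instance afterwards so that the new JAR and the broker.xml changes take effect.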
Automating system administration by using RHEL system roles | Automating system administration by using RHEL system roles Red Hat Enterprise Linux 8 Consistent and repeatable configuration of RHEL deployments across multiple hosts with Red Hat Ansible Automation Platform playbooks Red Hat Customer Content Services | [
"useradd ansible",
"su - ansible",
"[ansible@control-node]USD ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/ansible/.ssh/id_rsa): Enter passphrase (empty for no passphrase): <password> Enter same passphrase again: <password>",
"[defaults] inventory = /home/ansible/inventory remote_user = ansible [privilege_escalation] become = True become_method = sudo become_user = root become_ask_pass = True",
"managed-node-01.example.com [US] managed-node-02.example.com ansible_host=192.0.2.100 managed-node-03.example.com",
"yum install rhel-system-roles",
"[ansible@control-node]USD ansible-galaxy collection install redhat.rhel_system_roles",
"useradd ansible",
"passwd ansible Changing password for user ansible. New password: <password> Retype new password: <password> passwd: all authentication tokens updated successfully.",
"[ansible@control-node]USD ssh-copy-id managed-node-01.example.com /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: \"/home/ansible/.ssh/id_rsa.pub\" The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be established. ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.",
"Are you sure you want to continue connecting (yes/no/[fingerprint])? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys",
"[email protected]'s password: <password> Number of key(s) added: 1 Now try logging into the machine, with: \"ssh 'managed-node-01.example.com'\" and check to make sure that only the key(s) you wanted were added.",
"[ansible@control-node]USD ssh managed-node-01.example.com whoami ansible",
"visudo /etc/sudoers.d/ansible",
"ansible ALL=(ALL) ALL",
"ansible ALL=(ALL) NOPASSWD: ALL",
"[ansible@control-node]USD ansible all -m ping BECOME password: <password> managed-node-01.example.com | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" }",
"[ansible@control-node]USD ansible all -m command -a whoami BECOME password: <password> managed-node-01.example.com | CHANGED | rc=0 >> root",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"ansible-vault view vault.yml Vault password: <vault_password> my_secret: \"yJJvPqhsiusmmPPZdnjndkdnYNDjdj782meUZcw\"",
"ansible-vault edit vault.yml Vault password: <vault_password>",
"ansible-vault encrypt vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password> Encryption successful",
"ansible-vault decrypt vault.yml Vault password: <vault_password> Decryption successful",
"ansible-vault rekey vault.yml Vault password: <vault_password> New Vault password: <vault_password> Confirm New Vault password: <vault_password> Rekey successful",
"--- - name: Create user accounts for all servers hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create user from vault.yml file user: name: \"{{ username }}\" password: \"{{ pwhash }}\"",
"--- - name: Set boot device to be used on next boot hosts: managed-node-01.example.com tasks: - name: Ensure boot device is HD redhat.rhel_mgmt.ipmi_boot: user: <admin_user> password: <password> bootdev: hd",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Power management hosts: managed-node-01.example.com tasks: - name: Ensure machine is powered on redhat.rhel_mgmt.ipmi_power: user: <admin_user> password: <password> state: on",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Get CPU inventory redhat.rhel_mgmt.redfish_info: baseuri: \" <URI> \" username: \" <username> \" password: \" <password> \" category: Systems command: GetCpuInventory register: result",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Power on system redhat.rhel_mgmt.redfish_command: baseuri: \" <URI> \" username: \" <username> \" password: \" <password> \" category: Systems command: PowerOn",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manages out-of-band controllers using Redfish APIs hosts: managed-node-01.example.com tasks: - name: Set BootMode to UEFI redhat.rhel_mgmt.redfish_config: baseuri: \" <URI> \" username: \" <username> \" password: \" <password> \" category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"usr: administrator pwd: <password>",
"--- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: \"{{ usr }}\" ad_integration_password: \"{{ pwd }}\" ad_integration_realm: \"ad.example.com\" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: \"time_server.ad.example.com\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]' [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash",
"--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update existing boot loader entries ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - kernel: path: /boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64 options: - name: quiet state: present bootloader_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.command -a 'grubby --info=ALL' managed-node-01.example.com | CHANGED | rc=0 >> index=1 kernel=\"/boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64\" args=\"ro crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params quiet \" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-5.14.0-362.24.1.el9_3.aarch64.img USDtuned_initrd\" title=\"Red Hat Enterprise Linux (5.14.0-362.24.1.el9_3.aarch64) 9.4 (Plow)\" id=\"2c9ec787230141a9b087f774955795ab-5.14.0-362.24.1.el9_3.aarch64\"",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"pwd: <password>",
"--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Set the bootloader password ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_password: \"{{ pwd }}\" bootloader_reboot_ok: true",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Configuration and management of the GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update the boot loader timeout ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_timeout: 10",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.reboot managed-node-01.example.com | CHANGED => { \"changed\": true, \"elapsed\": 21, \"rebooted\": true }",
"ansible managed-node-01.example.com -m ansible.builtin.command -a \"grep 'timeout' /boot/grub2/grub.cfg\" managed-node-01.example.com | CHANGED | rc=0 >> if [ xUSDfeature_timeout_style = xy ] ; then set timeout_style=menu set timeout=10 Fallback normal timeout code in case the timeout_style feature is set timeout=10 if [ xUSDfeature_timeout_style = xy ] ; then set timeout_style=menu set timeout=10 set orig_timeout_style=USD{timeout_style} set orig_timeout=USD{timeout} # timeout_style=menu + timeout=0 avoids the countdown code keypress check set timeout_style=menu set timeout=10 set timeout_style=hidden set timeout=10 if [ xUSDfeature_timeout_style = xy ]; then if [ \"USD{menu_show_once_timeout}\" ]; then set timeout_style=menu set timeout=10 unset menu_show_once_timeout save_env menu_show_once_timeout",
"--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Gather information about the boot loader configuration ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_gather_facts: true - name: Display the collected boot loader configuration information debug: var: bootloader_facts",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"\"bootloader_facts\": [ { \"args\": \"ro crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params quiet\", \"default\": true, \"id\": \"2c9ec787230141a9b087f774955795ab-5.14.0-362.24.1.el9_3.aarch64\", \"index\": \"1\", \"initrd\": \"/boot/initramfs-5.14.0-362.24.1.el9_3.aarch64.img USDtuned_initrd\", \"kernel\": \"/boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64\", \"root\": \"/dev/mapper/rhel-root\", \"title\": \"Red Hat Enterprise Linux (5.14.0-362.24.1.el9_3.aarch64) 9.4 (Plow)\" } ]",
"--- - name: Create certificates hosts: managed-node-01.example.com tasks: - name: Create a self-signed certificate ansible.builtin.include_role: name: rhel-system-roles.certificate vars: certificate_requests: - name: web-server ca: ipa dns: www.example.com principal: HTTP/[email protected] run_before: systemctl stop httpd.service run_after: systemctl start httpd.service",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'getcert list' Number of certificates and requests being tracked: 1. Request ID '20240918142211': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/web-server.key' certificate: type=FILE,location='/etc/pki/tls/certs/web-server.crt' CA: IPA issuer: CN=Certificate Authority,O=EXAMPLE.COM subject: CN=www.example.com issued: 2024-09-18 16:22:11 CEST expires: 2025-09-18 16:22:10 CEST dns: www.example.com key usage: digitalSignature,keyEncipherment eku: id-kp-serverAuth,id-kp-clientAuth pre-save command: systemctl stop httpd.service post-save command: systemctl start httpd.service track: yes auto-renew: yes",
"--- - name: Create certificates hosts: managed-node-01.example.com tasks: - name: Create a self-signed certificate ansible.builtin.include_role: name: rhel-system-roles.certificate vars: certificate_requests: - name: web-server ca: self-sign dns: test.example.com",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'getcert list' Number of certificates and requests being tracked: 1. Request ID '20240918133610': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/web-server.key' certificate: type=FILE,location='/etc/pki/tls/certs/web-server.crt' CA: local issuer: CN=c32b16d7-5b1a4c5a-a953a711-c3ca58fb,CN=Local Signing Authority subject: CN=test.example.com issued: 2024-09-18 15:36:10 CEST expires: 2025-09-18 15:36:09 CEST dns: test.example.com key usage: digitalSignature,keyEncipherment eku: id-kp-serverAuth,id-kp-clientAuth pre-save command: post-save command: track: yes auto-renew: yes",
"--- - name: Manage the RHEL web console hosts: managed-node-01.example.com tasks: - name: Install RHEL web console ansible.builtin.include_role: name: rhel-system-roles.cockpit vars: cockpit_packages: default cockpit_port: 9050 cockpit_manage_selinux: true cockpit_manage_firewall: true cockpit_certificates: - name: /etc/cockpit/ws-certs.d/01-certificate dns: ['localhost', 'www.example.com'] ca: ipa",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active",
"ansible-playbook --syntax-check ~/verify_playbook.yml",
"ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }",
"--- - name: Configuring fapolicyd hosts: managed-node-01.example.com tasks: - name: Allow only executables installed from RPM database and specific files ansible.builtin.include_role: name: rhel-system-roles.fapolicyd vars: fapolicyd_setup_permissive: false fapolicyd_setup_integrity: sha256 fapolicyd_setup_trust: rpmdb,file fapolicyd_add_trusted_file: - <path_to_allowed_command> - <path_to_allowed_service>",
"ansible-playbook ~/playbook.yml --syntax-check",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'su -c \"/bin/not_authorized_application \" <user_name> ' bash: line 1: /bin/not_authorized_application: Operation not permitted non-zero return code",
"--- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - previous: replaced",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'",
"--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports' managed-node-01.example.com | CHANGED | rc=0 >> port=8080:proto=tcp:toport=443:toaddr=",
"--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all' managed-node-01.example.com | CHANGED | rc=0 >> dmz (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks:",
"- hosts: node1 node2 vars: ha_cluster_cluster_present: false roles: - rhel-system-roles.ha_cluster",
"ha_cluster_pcs_permission_list: - type: group name: hacluster allow_list: - grant - read - write",
"ha_cluster_transport: type: knet options: - name: option1_name value: option1_value - name: option2_name value: option2_value links: - - name: option1_name value: option1_value - name: option2_name value: option2_value - - name: option1_name value: option1_value - name: option2_name value: option2_value compression: - name: option1_name value: option1_value - name: option2_name value: option2_value crypto: - name: option1_name value: option1_value - name: option2_name value: option2_value",
"ha_cluster_totem: options: - name: option1_name value: option1_value - name: option2_name value: option2_value",
"ha_cluster_quorum: options: - name: option1_name value: option1_value - name: option2_name value: option2_value device: model: string model_options: - name: option1_name value: option1_value - name: option2_name value: option2_value generic_options: - name: option1_name value: option1_value - name: option2_name value: option2_value heuristics_options: - name: option1_name value: option1_value - name: option2_name value: option2_value",
"ha_cluster_cluster_properties: - attrs: - name: property1_name value: property1_value - name: property2_name value: property2_value",
"- hosts: node1 node2 vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: password ha_cluster_cluster_properties: - attrs: - name: stonith-enabled value: 'true' - name: no-quorum-policy value: stop roles: - rhel-system-roles.ha_cluster",
"ha_cluster_node_options: - node_name: node1 attributes: - attrs: - name: attribute1 value: value1_node1 - name: attribute2 value: value2_node1 - node_name: node2 attributes: - attrs: - name: attribute1 value: value1_node2 - name: attribute2 value: value2_node2",
"- id: resource-id agent: resource-agent instance_attrs: - attrs: - name: attribute1_name value: attribute1_value - name: attribute2_name value: attribute2_value meta_attrs: - attrs: - name: meta_attribute1_name value: meta_attribute1_value - name: meta_attribute2_name value: meta_attribute2_value copy_operations_from_agent: bool operations: - action: operation1-action attrs: - name: operation1_attribute1_name value: operation1_attribute1_value - name: operation1_attribute2_name value: operation1_attribute2_value - action: operation2-action attrs: - name: operation2_attribute1_name value: operation2_attribute1_value - name: operation2_attribute2_name value: operation2_attribute2_value",
"ha_cluster_resource_groups: - id: group-id resource_ids: - resource1-id - resource2-id meta_attrs: - attrs: - name: group_meta_attribute1_name value: group_meta_attribute1_value - name: group_meta_attribute2_name value: group_meta_attribute2_value",
"ha_cluster_resource_clones: - resource_id: resource-to-be-cloned promotable: true id: custom-clone-id meta_attrs: - attrs: - name: clone_meta_attribute1_name value: clone_meta_attribute1_value - name: clone_meta_attribute2_name value: clone_meta_attribute2_value",
"ha_cluster_resource_defaults: meta_attrs: - id: defaults-set-1-id rule: rule-string score: score-value attrs: - name: meta_attribute1_name value: meta_attribute1_value - name: meta_attribute2_name value: meta_attribute2_value - id: defaults-set-2-id rule: rule-string score: score-value attrs: - name: meta_attribute3_name value: meta_attribute3_value - name: meta_attribute4_name value: meta_attribute4_value",
"ha_cluster_stonith_levels: - level: 1..9 target: node_name target_pattern: node_name_regular_expression target_attribute: node_attribute_name target_value: node_attribute_value resource_ids: - fence_device_1 - fence_device_2 - level: 1..9 target: node_name target_pattern: node_name_regular_expression target_attribute: node_attribute_name target_value: node_attribute_value resource_ids: - fence_device_1 - fence_device_2",
"ha_cluster_constraints_location: - resource: id: resource-id node: node-name id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_location: - resource: pattern: resource-pattern node: node-name id: constraint-id options: - name: score value: score-value - name: resource-discovery value: resource-discovery-value",
"ha_cluster_constraints_location: - resource: id: resource-id role: resource-role rule: rule-string id: constraint-id options: - name: score value: score-value - name: resource-discovery value: resource-discovery-value",
"ha_cluster_constraints_location: - resource: pattern: resource-pattern role: resource-role rule: rule-string id: constraint-id options: - name: score value: score-value - name: resource-discovery value: resource-discovery-value",
"ha_cluster_constraints_colocation: - resource_follower: id: resource-id1 role: resource-role1 resource_leader: id: resource-id2 role: resource-role2 id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_colocation: - resource_sets: - resource_ids: - resource-id1 - resource-id2 options: - name: option-name value: option-value id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_order: - resource_first: id: resource-id1 action: resource-action1 resource_then: id: resource-id2 action: resource-action2 id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_order: - resource_sets: - resource_ids: - resource-id1 - resource-id2 options: - name: option-name value: option-value id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_ticket: - resource: id: resource-id role: resource-role ticket: ticket-name id: constraint-id options: - name: loss-policy value: loss-policy-value - name: option-name value: option-value",
"ha_cluster_constraints_ticket: - resource_sets: - resource_ids: - resource-id1 - resource-id2 options: - name: option-name value: option-value ticket: ticket-name id: constraint-id options: - name: option-name value: option-value",
"all: hosts: node1: ha_cluster: node_name: node-A pcs_address: node1-address corosync_addresses: - 192.168.1.11 - 192.168.2.11 node2: ha_cluster: node_name: node-B pcs_address: node2-address:2224 corosync_addresses: - 192.168.1.12 - 192.168.2.12",
"all: hosts: node1: ha_cluster: sbd_watchdog_modules: - module1 - module2 sbd_watchdog: /dev/watchdog2 sbd_devices: - /dev/disk/by-id/000001 - /dev/disk/by-id/000001 - /dev/disk/by-id/000003 node2: ha_cluster: sbd_watchdog_modules: - module1 sbd_watchdog_modules_blocklist: - module2 sbd_watchdog: /dev/watchdog1 sbd_devices: - /dev/disk/by-id/000001 - /dev/disk/by-id/000002 - /dev/disk/by-id/000003",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create TLS certificates and key files in a high availability cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_pcsd_certificates: - name: FILENAME common_name: \"{{ ansible_hostname }}\" ca: self-sign",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with minimum required parameters and no fencing ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with fencing and resources ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_resource_primitives: - id: xvm-fencing agent: 'stonith:fence_xvm' instance_attrs: - attrs: - name: pcmk_host_list value: node1 node2 - id: simple-resource agent: 'ocf:pacemaker:Dummy' - id: resource-with-options agent: 'ocf:pacemaker:Dummy' instance_attrs: - attrs: - name: fake value: fake-value - name: passwd value: passwd-value meta_attrs: - attrs: - name: target-role value: Started - name: is-managed value: 'true' operations: - action: start attrs: - name: timeout value: '30s' - action: monitor attrs: - name: timeout value: '5' - name: interval value: '1min' - id: dummy-1 agent: 'ocf:pacemaker:Dummy' - id: dummy-2 agent: 'ocf:pacemaker:Dummy' - id: dummy-3 agent: 'ocf:pacemaker:Dummy' - id: simple-clone agent: 'ocf:pacemaker:Dummy' - id: clone-with-options agent: 'ocf:pacemaker:Dummy' ha_cluster_resource_groups: - id: simple-group resource_ids: - dummy-1 - dummy-2 meta_attrs: - attrs: - name: target-role value: Started - name: is-managed value: 'true' - id: cloned-group resource_ids: - dummy-3 ha_cluster_resource_clones: - resource_id: simple-clone - resource_id: clone-with-options promotable: yes id: custom-clone-id meta_attrs: - attrs: - name: clone-max value: '2' - name: clone-node-max value: '1' - resource_id: cloned-group promotable: yes",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with fencing and resource operation defaults ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true # Set a different resource-stickiness value during # and outside work hours. This allows resources to # automatically move back to their most # preferred hosts, but at a time that # does not interfere with business activities. ha_cluster_resource_defaults: meta_attrs: - id: core-hours rule: date-spec hours=9-16 weekdays=1-5 score: 2 attrs: - name: resource-stickiness value: INFINITY - id: after-hours score: 1 attrs: - name: resource-stickiness value: 0 # Default the timeout on all 10-second-interval # monitor actions on IPaddr2 resources to 8 seconds. ha_cluster_resource_operation_defaults: meta_attrs: - rule: resource ::IPaddr2 and op monitor interval=10s score: INFINITY attrs: - name: timeout value: 8s",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password> fence1_password: <fence1_password> fence2_password: <fence2_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Configure a cluster that defines fencing levels ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_resource_primitives: - id: apc1 agent: 'stonith:fence_apc_snmp' instance_attrs: - attrs: - name: ip value: apc1.example.com - name: username value: user - name: password value: \"{{ fence1_password }}\" - name: pcmk_host_map value: node1:1;node2:2 - id: apc2 agent: 'stonith:fence_apc_snmp' instance_attrs: - attrs: - name: ip value: apc2.example.com - name: username value: user - name: password value: \"{{ fence2_password }}\" - name: pcmk_host_map value: node1:1;node2:2 # Nodes have redundant power supplies, apc1 and apc2. Cluster must # ensure that when attempting to reboot a node, both power # supplies # are turned off before either power supply is turned # back on. ha_cluster_stonith_levels: - level: 1 target: node1 resource_ids: - apc1 - apc2 - level: 1 target: node2 resource_ids: - apc1 - apc2",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with resource constraints ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true # In order to use constraints, we need resources # the constraints will apply to. ha_cluster_resource_primitives: - id: xvm-fencing agent: 'stonith:fence_xvm' instance_attrs: - attrs: - name: pcmk_host_list value: node1 node2 - id: dummy-1 agent: 'ocf:pacemaker:Dummy' - id: dummy-2 agent: 'ocf:pacemaker:Dummy' - id: dummy-3 agent: 'ocf:pacemaker:Dummy' - id: dummy-4 agent: 'ocf:pacemaker:Dummy' - id: dummy-5 agent: 'ocf:pacemaker:Dummy' - id: dummy-6 agent: 'ocf:pacemaker:Dummy' # location constraints ha_cluster_constraints_location: # resource ID and node name - resource: id: dummy-1 node: node1 options: - name: score value: 20 # resource pattern and node name - resource: pattern: dummy-\\d+ node: node1 options: - name: score value: 10 # resource ID and rule - resource: id: dummy-2 rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28' # resource pattern and rule - resource: pattern: dummy-\\d+ rule: node-type eq weekend and date-spec weekdays=6-7 # colocation constraints ha_cluster_constraints_colocation: # simple constraint - resource_leader: id: dummy-3 resource_follower: id: dummy-4 options: - name: score value: -5 # set constraint - resource_sets: - resource_ids: - dummy-1 - dummy-2 - resource_ids: - dummy-5 - dummy-6 options: - name: sequential value: \"false\" options: - name: score value: 20 # order constraints ha_cluster_constraints_order: # simple constraint - resource_first: id: dummy-1 resource_then: id: dummy-6 options: - name: symmetrical value: \"false\" # set constraint - resource_sets: - resource_ids: - dummy-1 - dummy-2 options: - name: require-all value: \"false\" - name: sequential value: \"false\" - resource_ids: - dummy-3 - resource_ids: - dummy-4 - dummy-5 options: - name: sequential value: \"false\" # ticket constraints ha_cluster_constraints_ticket: # simple constraint - resource: id: dummy-1 ticket: ticket1 options: - name: loss-policy value: stop # set constraint - resource_sets: - resource_ids: - dummy-3 - dummy-4 - dummy-5 ticket: ticket2 options: - name: loss-policy value: fence",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster that configures Corosync values ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_transport: type: knet options: - name: ip_version value: ipv4-6 - name: link_mode value: active links: - - name: linknumber value: 1 - name: link_priority value: 5 - - name: linknumber value: 0 - name: link_priority value: 10 compression: - name: level value: 5 - name: model value: zlib crypto: - name: cipher value: none - name: hash value: none ha_cluster_totem: options: - name: block_unlisted_ips value: 'yes' - name: send_join value: 0 ha_cluster_quorum: options: - name: auto_tie_breaker value: 1 - name: wait_for_all value: 1",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Create a high availability cluster that uses SBD node fencing hosts: node1 node2 roles: - rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: <password> ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_sbd_enabled: yes ha_cluster_sbd_options: - name: delay-start value: 'no' - name: startmode value: always - name: timeout-action value: 'flush,reboot' - name: watchdog-timeout value: 30 # Suggested optimal values for SBD timeouts: # watchdog-timeout * 2 = msgwait-timeout (set automatically) # msgwait-timeout * 1.2 = stonith-timeout ha_cluster_cluster_properties: - attrs: - name: stonith-timeout value: 72 ha_cluster_resource_primitives: - id: fence_sbd agent: 'stonith:fence_sbd' instance_attrs: - attrs: # taken from host_vars - name: devices value: \"{{ ha_cluster.sbd_devices | join(',') }}\" - name: pcmk_delay_base value: 30",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Configure a host with a quorum device hosts: nodeQ vars_files: - vault.yml tasks: - name: Create a quorum device for the cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_present: false ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_qnetd: present: true",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml",
"ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml",
"--- - name: Configure a cluster to use a quorum device hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster that uses a quorum device ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_quorum: device: model: net model_options: - name: host value: nodeQ - name: algorithm value: lms",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml",
"ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create a cluster that defines node attributes ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_node_options: - node_name: node1 attributes: - attrs: - name: attribute1 value: value1A - name: attribute2 value: value2A - node_name: node2 attributes: - attrs: - name: attribute1 value: value1B - name: attribute2 value: value2B",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: z1.example.com z2.example.com vars_files: - vault.yml tasks: - name: Configure active/passive Apache server in a high availability cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_cluster_name: my_cluster ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_fence_agent_packages: - fence-agents-apc-snmp ha_cluster_resource_primitives: - id: myapc agent: stonith:fence_apc_snmp instance_attrs: - attrs: - name: ipaddr value: zapc.example.com - name: pcmk_host_map value: z1.example.com:1;z2.example.com:2 - name: login value: apc - name: passwd value: apc - id: my_lvm agent: ocf:heartbeat:LVM-activate instance_attrs: - attrs: - name: vgname value: my_vg - name: vg_access_mode value: system_id - id: my_fs agent: Filesystem instance_attrs: - attrs: - name: device value: /dev/my_vg/my_lv - name: directory value: /var/www - name: fstype value: xfs - id: VirtualIP agent: IPaddr2 instance_attrs: - attrs: - name: ip value: 198.51.100.3 - name: cidr_netmask value: 24 - id: Website agent: apache instance_attrs: - attrs: - name: configfile value: /etc/httpd/conf/httpd.conf - name: statusurl value: http://127.0.0.1/server-status ha_cluster_resource_groups: - id: apachegroup resource_ids: - my_lvm - my_fs - VirtualIP - Website",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true",
"/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /var/run/httpd-Website.pid\" -k graceful > /dev/null 2>/dev/null || true",
"/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /run/httpd.pid\" -k graceful > /dev/null 2>/dev/null || true",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 16:38:51 2013 Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com my_fs (ocf::heartbeat:Filesystem): Started z1.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z1.example.com Website (ocf::heartbeat:apache): Started z1.example.com",
"Hello",
"pcs node standby z1.example.com",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.com",
"pcs node unstandby z1.example.com",
"--- - name: Configure journald hosts: managed-node-01.example.com tasks: - name: Configure persistent logging ansible.builtin.include_role: name: rhel-system-roles.journald vars: journald_persistent: true journald_max_disk_size: <size> journald_per_user: true journald_sync_interval: <interval>",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configuring kernel crash dumping hosts: managed-node-01.example.com tasks: - name: Setting the kdump directory. ansible.builtin.include_role: name: rhel-system-roles.kdump vars: kdump_target: type: raw location: /dev/sda1 kdump_path: /var/crash/vmcore kernel_settings_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'grep crashkernel /proc/cmdline'",
"--- - name: Configuring kernel settings hosts: managed-node-01.example.com tasks: - name: Configure hugepages, packet size for loopback device, and limits on simultaneously open files. ansible.builtin.include_role: name: rhel-system-roles.kernel_settings vars: kernel_settings_sysctl: - name: fs.file-max value: 400000 - name: kernel.threads-max value: 65536 kernel_settings_sysfs: - name: /sys/class/net/lo/mtu value: 65000 kernel_settings_transparent_hugepages: madvise kernel_settings_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'sysctl fs.file-max kernel.threads-max net.ipv6.conf.lo.mtu' ansible managed-node-01.example.com -m command -a 'cat /sys/kernel/mm/transparent_hugepage/enabled'",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. Bye.",
"logger error",
"cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.",
"logger test",
"cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Monitoring performance metrics hosts: managed-node-01.example.com tasks: - name: Configure Performance Co-Pilot ansible.builtin.include_role: name: rhel-system-roles.metrics vars: metrics_retention_days: 14 metrics_manage_firewall: true metrics_manage_selinux: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'pminfo -f kernel.all.load'",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"metrics_usr: <username> metrics_pwd: <password>",
"--- - name: Monitoring performance metrics hosts: managed-node-01.example.com tasks: - name: Configure Performance Co-Pilot ansible.builtin.include_role: name: rhel-system-roles.metrics vars: metrics_retention_days: 14 metrics_manage_firewall: true metrics_manage_selinux: true metrics_username: \"{{ metrics_usr }}\" metrics_password: \"{{ metrics_pwd }}\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"pminfo -fmdt -h pcp://managed-node-01.example.com?username= <user> proc.fd.count Password: <password> proc.fd.count inst [844 or \"000844 /var/lib/pcp/pmdas/proc/pmdaproc\"] value 5",
"pminfo -fmdt -h pcp://managed-node-01.example.com proc.fd.count proc.fd.count Error: No permission to perform requested operation",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"grafana_admin_pwd: <password>",
"--- - name: Monitoring performance metrics hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Set up Grafana to monitor multiple hosts ansible.builtin.include_role: name: rhel-system-roles.metrics vars: metrics_graph_service: true metrics_query_service: true metrics_monitored_hosts: - <pcp_host_1.example.com> - <pcp_host_2.example.com> metrics_manage_firewall: true metrics_manage_selinux: true - name: Set Grafana admin password ansible.builtin.shell: cmd: grafana-cli admin reset-admin-password \"{{ grafana_admin_pwd }}\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Deploy a Tang server hosts: tang.server.example.com tasks: - name: Install and configure periodic key rotation ansible.builtin.include_role: name: rhel-system-roles.nbde_server vars: nbde_server_rotate_keys: yes nbde_server_manage_firewall: true nbde_server_manage_selinux: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'echo test | clevis encrypt tang '{\"url\":\" <tang.server.example.com> \"}' -y | clevis decrypt' test",
"--- - name: Configure clients for unlocking of encrypted volumes by Tang servers hosts: managed-node-01.example.com tasks: - name: Create NBDE client bindings ansible.builtin.include_role: name: rhel-system-roles.nbde_client vars: nbde_client_bindings: - device: /dev/rhel/root encryption_key_src: /etc/luks/keyfile nbde_client_early_boot: true state: present servers: - http://server1.example.com - http://server2.example.com - device: /dev/rhel/swap encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'clevis luks list -d /dev/rhel/root' 1: tang '{\"url\":\" <http://server1.example.com/> \"}' 2: tang '{\"url\":\" <http://server2.example.com/> \"}'",
"ansible managed-node-01.example.com -m command -a 'lsinitrd | grep clevis-luks' lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...",
"clients: managed-node-01.example.com: ip_v4: 192.0.2.1 gateway_v4: 192.0.2.254 netmask_v4: 255.255.255.0 interface: enp1s0 managed-node-02.example.com: ip_v4: 192.0.2.2 gateway_v4: 192.0.2.254 netmask_v4: 255.255.255.0 interface: enp1s0",
"- name: Configure clients for unlocking of encrypted volumes by Tang servers hosts: managed-node-01.example.com,managed-node-02.example.com vars_files: - ~/static-ip-settings-clients.yml tasks: - name: Create NBDE client bindings ansible.builtin.include_role: name: rhel-system-roles.network vars: nbde_client_bindings: - device: /dev/rhel/root encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com - device: /dev/rhel/swap encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com - name: Configure a Clevis client with static IP address during early boot ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - kernel: ALL options: - name: ip value: \"{{ clients[inventory_hostname]['ip_v4'] }}::{{ clients[inventory_hostname]['gateway_v4'] }}:{{ clients[inventory_hostname]['netmask_v4'] }}::{{ clients[inventory_hostname]['interface'] }}:none\"",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe",
"--- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: \"{{ interface }}\" interface_name: \"{{ interface }}\" type: ethernet autoconnect: yes ip: address: - \"{{ ip_v4 }}\" - \"{{ ip_v6 }}\" gateway4: \"{{ gateway_v4 }}\" gateway6: \"{{ gateway_v6 }}\" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"pwd: <password>",
"--- - name: Configure an Ethernet connection with 802.1X authentication hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0600 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Ethernet connection profile with static IP address settings and 802.1X ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/private/client.key\" private_key_password: \"{{ pwd }}\" client_cert: \"/etc/pki/tls/certs/client.crt\" ca_cert: \"/etc/pki/ca-trust/source/anchors/ca.crt\" domain_suffix_match: example.com state: up",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bond connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bond profile - name: bond0 type: bond interface_name: bond0 ip: dhcp4: yes auto6: yes bond: mode: active-backup state: up # Port profile for the 1st Ethernet device - name: bond0-port1 interface_name: enp7s0 type: ethernet controller: bond0 state: up # Port profile for the 2nd Ethernet device - name: bond0-port2 interface_name: enp8s0 type: ethernet controller: bond0 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10' managed-node-01.example.com | CHANGED | rc=0 >> 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip link show master bridge0' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff",
"ansible managed-node-01.example.com -m command -a 'bridge link show' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 198.51.100.20/24 - 2001:db8:1::1/64 gateway4: 198.51.100.254 gateway6: 2001:db8:1::fffe dns: - 198.51.100.200 - 2001:db8:1::ffbb dns_search: - example.com state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"gateway\": \"198.51.100.254\", \"interface\": \"enp1s0\", }, \"ansible_default_ipv6\": { \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", }",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp7s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com route: - network: 198.51.100.0 prefix: 24 gateway: 192.0.2.10 - network: 2001:db8:2:: prefix: 64 gateway: 2001:db8:1::10 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip -4 route' managed-node-01.example.com | CHANGED | rc=0 >> 198.51.100.0/24 via 192.0.2.10 dev enp7s0",
"ansible managed-node-01.example.com -m command -a 'ip -6 route' managed-node-01.example.com | CHANGED | rc=0 >> 2001:db8:2::/64 via 2001:db8:1::10 dev enp7s0 metric 1024 pref medium",
"--- - name: Configuring policy-based routing hosts: managed-node-01.example.com tasks: - name: Routing traffic from a specific subnet to a different default gateway ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: Provider-A interface_name: enp7s0 type: ethernet autoconnect: True ip: address: - 198.51.100.1/30 gateway4: 198.51.100.2 dns: - 198.51.100.200 state: up zone: external - name: Provider-B interface_name: enp1s0 type: ethernet autoconnect: True ip: address: - 192.0.2.1/30 route: - network: 0.0.0.0 prefix: 0 gateway: 192.0.2.2 table: 5000 state: up zone: external - name: Internal-Workstations interface_name: enp8s0 type: ethernet autoconnect: True ip: address: - 10.0.0.1/24 route: - network: 10.0.0.0 prefix: 24 table: 5000 routing_rule: - priority: 5 from: 10.0.0.0/24 table: 5000 state: up zone: trusted - name: Servers interface_name: enp9s0 type: ethernet autoconnect: True ip: address: - 203.0.113.1/24 state: up zone: trusted",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms",
"ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default",
"ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102",
"firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0",
"firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_enp1s0\": { \"active\": true, \"device\": \"enp1s0\", \"features\": { \"rx_gro_hw\": \"off, \"tx_gso_list\": \"on, \"tx_sctp_segmentation\": \"off\", }",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> rx-frames: 128 tx-frames: 128",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> Current hardware settings: RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: IPoIB connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # InfiniBand connection mlx4_ib0 - name: mlx4_ib0 interface_name: mlx4_ib0 type: infiniband # IPoIB device mlx4_ib0.8002 on top of mlx4_ib0 - name: mlx4_ib0.8002 type: infiniband autoconnect: yes infiniband: p_key: 0x8002 transport_mode: datagram parent: mlx4_ib0 ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip address show mlx4_ib0.8002' managed-node-01.example.com | CHANGED | rc=0 >> inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute ib0.8002 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/64 scope link tentative noprefixroute valid_lft forever preferred_lft forever",
"ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/pkey' managed-node-01.example.com | CHANGED | rc=0 >> 0x8002",
"ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/mode' managed-node-01.example.com | CHANGED | rc=0 >> datagram",
"vars: network_state: interfaces: - name: enp7s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true",
"vars: network_connections: - name: enp7s0 interface_name: enp7s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"vars: network_state: interfaces: - name: enp7s0 type: ethernet state: down",
"vars: network_connections: - name: enp7s0 interface_name: enp7s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: down",
"- name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Create a web application and a database ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_firewall: - port: 8080-8081/tcp state: enabled - port: 12340/tcp state: enabled podman_selinux_ports: - ports: 8080-8081 setype: http_port_t podman_kube_specs: - state: started run_as_user: dbuser run_as_group: dbgroup kube_file_content: apiVersion: v1 kind: Pod metadata: name: db spec: containers: - name: db image: quay.io/linux-system-roles/mysql:5.6 ports: - containerPort: 1234 hostPort: 12340 volumeMounts: - mountPath: /var/lib/db:Z name: db volumes: - name: db hostPath: path: /var/lib/db - state: started run_as_user: webapp run_as_group: webapp kube_file_src: /path/to/webapp.yml",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"- name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Start Apache server on port 8080 ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_firewall: - port: 8080/tcp state: enabled podman_kube_specs: - state: started kube_file_content: apiVersion: v1 kind: Pod metadata: name: ubi8-httpd spec: containers: - name: ubi8-httpd image: registry.access.redhat.com/ubi8/httpd-24 ports: - containerPort: 8080 hostPort: 8080 volumeMounts: - mountPath: /var/www/html:Z name: ubi8-html volumes: - name: ubi8-html persistentVolumeClaim: claimName: ubi8-html-volume",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cat ~/certificate.pem -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- cat ~/key.pem -----BEGIN PRIVATE KEY----- -----END PRIVATE KEY-----",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"root_password: <root_password> certificate: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- key: |- -----BEGIN PRIVATE KEY----- -----END PRIVATE KEY-----",
"- name: Deploy a wordpress CMS with MySQL database hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and run the container ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_activate_systemd_unit: false podman_quadlet_specs: - name: quadlet-demo type: network file_content: | [Network] Subnet=192.168.30.0/24 Gateway=192.168.30.1 Label=app=wordpress - file_src: quadlet-demo-mysql.volume - template_src: quadlet-demo-mysql.container.j2 - file_src: envoy-proxy-configmap.yml - file_src: quadlet-demo.yml - file_src: quadlet-demo.kube activate_systemd_unit: true podman_firewall: - port: 8000/tcp state: enabled - port: 9000/tcp state: enabled podman_secrets: - name: mysql-root-password-container state: present skip_existing: true data: \"{{ root_password }}\" - name: mysql-root-password-kube state: present skip_existing: true data: | apiVersion: v1 data: password: \"{{ root_password | b64encode }}\" kind: Secret metadata: name: mysql-root-password-kube - name: envoy-certificates state: present skip_existing: true data: | apiVersion: v1 data: certificate.key: {{ key | b64encode }} certificate.pem: {{ certificate | b64encode }} kind: Secret metadata: name: envoy-certificates",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Manage Postfix hosts: managed-node-01.example.com tasks: - name: Install postfix ansible.builtin.package: name: postfix state: present - name: Configure null client for only sending outgoing emails ansible.builtin.include_role: name: rhel-system-roles.postfix vars: postfix_conf: myhostname: server.example.com myorigin: \"USDmydomain\" relayhost: smtp.example.com inet_interfaces: loopback-only mydestination: \"\" relay_domains: \"{{ lookup('ansible.builtin.pipe', 'postconf -h default_database_type') }}:/etc/postfix/relay_domains\" postfix_files: - name: relay_domains postmap: true content: | example.com OK example.net OK",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"pwd: <password>",
"--- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create directory for TLS certificate and key ansible.builtin.file: path: /etc/postgresql/ state: directory mode: 755 - name: Copy CA certificate ansible.builtin.copy: src: \"~/{{ inventory_hostname }}.crt\" dest: \"/etc/postgresql/server.crt\" - name: Copy private key ansible.builtin.copy: src: \"~/{{ inventory_hostname }}.key\" dest: \"/etc/postgresql/server.key\" mode: 0600 - name: PostgreSQL with an existing private key and certificate ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: \"16\" postgresql_password: \"{{ pwd }}\" postgresql_ssl_enable: true postgresql_cert_name: \"/etc/postgresql/server\" postgresql_server_conf: listen_addresses: \"'*'\" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"psql \"postgresql://[email protected]:5432\" -c '\\conninfo' Password for user postgres: You are connected to database \"postgres\" as user \"postgres\" on host \"192.0.2.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"pwd: <password>",
"--- - name: Installing and configuring PostgreSQL hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: PostgreSQL with certificates issued by IdM ansible.builtin.include_role: name: rhel-system-roles.postgresql vars: postgresql_version: \"16\" postgresql_password: \"{{ pwd }}\" postgresql_ssl_enable: true postgresql_certificates: - name: postgresql_cert dns: \"{{ inventory_hostname }}\" ca: ipa principal: \"postgresql/{{ inventory_hostname }}@EXAMPLE.COM\" postgresql_server_conf: listen_addresses: \"'*'\" password_encryption: scram-sha-256 postgresql_pg_hba_conf: - type: local database: all user: all auth_method: scram-sha-256 - type: hostssl database: all user: all address: '127.0.0.1/32' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '::1/128' auth_method: scram-sha-256 - type: hostssl database: all user: all address: '192.0.2.0/24' auth_method: scram-sha-256 - name: Open the PostgresQL port in firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - service: postgresql state: enabled",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"psql \"postgresql://[email protected]:5432\" -c '\\conninfo' Password for user postgres: You are connected to database \"postgres\" as user \"postgres\" on host \"192.0.2.1\" at port \"5432\". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"activationKey: <activation_key> username: <username> password: <password>",
"--- - name: Registering system using activation key and organization ID hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: activation_keys: keys: - \"{{ activationKey }}\" rhc_organization: organizationID",
"--- - name: Registering system with username and password hosts: managed-node-01.example.com vars_files: - vault.yml vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" roles: - role: rhel-system-roles.rhc",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"activationKey: <activation_key>",
"--- - name: Register to the custom registration server and CDN hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: activation_keys: keys: - \"{{ activationKey }}\" rhc_organization: organizationID rhc_server: hostname: example.com port: 443 prefix: /rhsm rhc_baseurl: http://example.com/pulp/content",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Disable Insights connection hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_insights: state: absent",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Enable repository hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_repositories: - {name: \"RepositoryName\", state: enabled}",
"--- - name: Disable repository hosts: managed-node-01.example.com vars: rhc_repositories: - {name: \"RepositoryName\", state: disabled} roles: - role: rhel-system-roles.rhc",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Set Release hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_release: \"8.6\"",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"username: <username> password: <password> proxy_username: <proxyusernme> proxy_password: <proxypassword>",
"--- - name: Register using proxy hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_proxy: hostname: proxy.example.com port: 3128 username: \"{{ proxy_username }}\" password: \"{{ proxy_password }}\"",
"--- - name: To stop using proxy server for registration hosts: managed-node-01.example.com vars_files: - vault.yml vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_proxy: {\"state\":\"absent\"} roles: - role: rhel-system-roles.rhc",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"username: <username> password: <password>",
"--- - name: Disable Red Hat Insights autoupdates hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_insights: autoupdate: false state: present",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Disable remediation hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_insights: remediation: absent state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"username: <username> password: <password>",
"--- - name: Creating tags hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_insights: tags: group: group-name-value location: location-name-value description: - RHEL8 - SAP sample_key:value state: present",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Unregister the system hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_state: absent",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cp /usr/share/doc/rhel-system-roles/selinux/example-selinux-playbook.yml <my-selinux-playbook.yml> vi <my-selinux-playbook.yml>",
"selinux_modules: - { path: \"selinux-local-1.pp\", priority: \"400\" }",
"ansible-playbook <my-selinux-playbook.yml> --syntax-check",
"ansible-playbook <my-selinux-playbook.yml>",
"- name: Allow Apache to listen on tcp port <port_number> community.general.seport: ports: <port_number> proto: tcp setype: http_port_t state: present",
"--- - name: Modify SELinux port mapping example hosts: all vars: # Map tcp port <port_number> to the 'http_port_t' SELinux port type selinux_ports: - ports: <port_number> proto: tcp setype: http_port_t state: present tasks: - name: Include selinux role ansible.builtin.include_role: name: rhel-system-roles.selinux",
"semanage port --list | grep http_port_t http_port_t tcp <port_number> , 80, 81, 443, 488, 8008, 8009, 8443, 9000",
"sshd_ListenAddress: - 0.0.0.0 - '::'",
"ListenAddress 0.0.0.0 ListenAddress ::",
"--- - name: SSH server configuration hosts: managed-node-01.example.com tasks: - name: Configure sshd to prevent root and password login except from particular subnet ansible.builtin.include_role: name: rhel-system-roles.sshd vars: sshd: PermitRootLogin: no PasswordAuthentication: no Match: - Condition: \"Address 192.0.2.0/24\" PermitRootLogin: yes PasswordAuthentication: yes",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ssh <username> @ <ssh_server>",
"cat /etc/ssh/sshd_config PasswordAuthentication no PermitRootLogin no Match Address 192.0.2.0/24 PasswordAuthentication yes PermitRootLogin yes",
"hostname -I 192.0.2.1",
"ssh root@ <ssh_server>",
"--- - name: Non-exclusive sshd configuration hosts: managed-node-01.example.com tasks: - name: Configure SSHD to accept environment variables ansible.builtin.include_role: name: rhel-system-roles.sshd vars: sshd_config_namespace: <my-application> sshd: # Environment variables to accept AcceptEnv: LANG LS_COLORS EDITOR",
"- name: Non-exclusive sshd configuration hosts: managed-node-01.example.com tasks: - name: Configure sshd to accept environment variables ansible.builtin.include_role: name: rhel-system-roles.sshd vars: sshd_config_file: /etc/ssh/sshd_config.d/ <42-my-application> .conf sshd: # Environment variables to accept AcceptEnv: LANG LS_COLORS EDITOR",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cat /etc/ssh/sshd_config BEGIN sshd system role managed block: namespace <my-application> Match all AcceptEnv LANG LS_COLORS EDITOR END sshd system role managed block: namespace <my-application>",
"cat /etc/ssh/sshd_config.d/42-my-application.conf Ansible managed # AcceptEnv LANG LS_COLORS EDITOR",
"- name: Deploy SSH configuration for OpenSSH server hosts: managed-node-01.example.com tasks: - name: Overriding the system-wide cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.sshd vars: sshd_sysconfig: true sshd_sysconfig_override_crypto_policy: true sshd_KexAlgorithms: ecdh-sha2-nistp521 sshd_Ciphers: aes256-ctr sshd_MACs: [email protected] sshd_HostKeyAlgorithms: rsa-sha2-512,rsa-sha2-256",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ssh -vvv <ssh_server> debug2: peer server KEXINIT proposal debug2: KEX algorithms: ecdh-sha2-nistp521 debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256 debug2: ciphers ctos: aes256-ctr debug2: ciphers stoc: aes256-ctr debug2: MACs ctos: [email protected] debug2: MACs stoc: [email protected]",
"LocalForward: - 22 localhost:2222 - 403 localhost:4003",
"LocalForward 22 localhost:2222 LocalForward 403 localhost:4003",
"--- - name: SSH client configuration hosts: managed-node-01.example.com tasks: - name: Configure ssh clients ansible.builtin.include_role: name: rhel-system-roles.ssh vars: ssh_user: root ssh: Compression: true GSSAPIAuthentication: no ControlMaster: auto ControlPath: ~/.ssh/.cm%C Host: - Condition: example Hostname: server.example.com User: user1 ssh_ForwardX11: no",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cat ~/root/.ssh/config Ansible managed Compression yes ControlMaster auto ControlPath ~/.ssh/.cm%C ForwardX11 no GSSAPIAuthentication no Host example Hostname example.com User user1",
"--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lvs myvg'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'",
"--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - hosts: all roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Create a disk device with swap hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lsblk'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs raid_level: raid0 raid_stripe_size: \"256 KiB\" state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create LVM-VDO volume under volume group 'myvg' ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state' LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert VDOCompression VDOCompressionState VDODeduplication VDOIndexState mylv1 myvg vwi-a-v--- 3.00t vpool0 enabled online enabled online",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"luks_password: <password>",
"--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c",
"ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb",
"ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]",
"--- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Managing systemd services hosts: managed-node-01.example.com tasks: - name: Perform action on systemd units ansible.builtin.include_role: name: rhel-system-roles.systemd vars: systemd_started_units: - <systemd_unit_1> .service systemd_stopped_units: - <systemd_unit_2> .service systemd_restarted_units: - <systemd_unit_3> .service systemd_reloaded_units: - <systemd_unit_4> .service systemd_enabled_units: - <systemd_unit_5> .service systemd_disabled_units: - <systemd_unit_6> .service systemd_masked_units: - <systemd_unit_7> .service systemd_unmasked_units: - <systemd_unit_8> .service",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"{{ ansible_managed | comment }} [Unit] After= After=network.target sshd-keygen.target network-online.target",
"--- - name: Managing systemd services hosts: managed-node-01.example.com tasks: - name: Deploy an sshd.service systemd drop-in file ansible.builtin.include_role: name: rhel-system-roles.systemd vars: systemd_dropins: - sshd.service.conf.j2",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ls /etc/systemd/system/sshd.service.d/' 99-override.conf",
"{{ ansible_managed | comment }} [Unit] Description=Example systemd service unit file [Service] ExecStart=/bin/true",
"--- - name: Managing systemd services hosts: managed-node-01.example.com tasks: - name: Deploy, enable, and start a custom systemd service ansible.builtin.include_role: name: rhel-system-roles.systemd vars: systemd_unit_file_templates: - example.service.j2 systemd_enabled_units: - example.service systemd_started_units: - example.service",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'systemctl status example.service' ● example.service - A service for demonstrating purposes Loaded: loaded (/etc/systemd/system/example.service; enabled ; vendor preset: disabled) Active: active (running) since Thu 2024-07-04 15:59:18 CEST; 10min ago",
"--- - name: Managing time synchronization hosts: managed-node-01.example.com tasks: - name: Configuring NTP with an internal server (preferred) and a public server pool as fallback ansible.builtin.include_role: name: rhel-system-roles.timesync vars: timesync_ntp_servers: - hostname: time.example.com trusted: yes prefer: yes iburst: yes - hostname: 0.rhel.pool.ntp.org pool: yes iburst: yes",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'chronyc sources' MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^* time.example.com 1 10 377 210 +159us[ +55us] +/- 12ms ^? ntp.example.org 2 9 377 409 +1120us[+1021us] +/- 42ms ^? time.us.example.net 2 9 377 992 -329us[ -386us] +/- 15ms",
"ansible managed-node-01.example.com -m command -a 'ntpq -p' remote refid st t when poll reach delay offset jitter ============================================================================== *time.example.com .PTB. 1 u 2 64 77 23.585 967.902 0.684 - ntp.example.or 192.0.2.17 2 u - 64 77 27.090 966.755 0.468 +time.us.example 198.51.100.19 2 u 65 64 37 18.497 968.463 1.588",
"--- - name: Managing time synchronization hosts: managed-node-01.example.com tasks: - name: Configuring NTP with NTS-enabled servers ansible.builtin.include_role: name: rhel-system-roles.timesync vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de nts: yes iburst: yes",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'chronyc sources' MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^* ptbtime1.ptb.de 1 6 17 55 -13us[ -54us] +/- 12ms ^- ptbtime2.ptb.de 1 6 17 56 -257us[ -297us] +/- 12ms",
"ansible managed-node-01.example.com -m command -a 'chronyc -N authdata' Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen ========================================================================= ptbtime1.ptb.de NTS 1 15 256 229 0 0 8 100 ptbtime2.ptb.de NTS 1 15 256 230 0 0 8 100",
"ansible managed-node-01.example.com -m command -a 'ntpq -p' remote refid st t when poll reach delay offset jitter ============================================================================== *ptbtime1.ptb.de .PTB. 1 8 2 64 77 23.585 967.902 0.684 -ptbtime2.ptb.de .PTB. 1 8 30 64 78 24.653 993.937 0.765",
"--- - name: Deploy session recording hosts: managed-node-01.example.com tasks: - name: Enable session recording for specific users ansible.builtin.include_role: name: rhel-system-roles.tlog vars: tlog_scope_sssd: some tlog_users_sssd: - <recorded_user>",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cd /etc/sssd/conf.d/sssd-session-recording.conf",
"journalctl _COMM=tlog-rec-sessio Nov 12 09:17:30 managed-node-01.example.com -tlog-rec-session[1546]: {\"ver\":\"2.3\",\"host\":\"managed-node-01.example.com\",\"rec\":\"07418f2b0f334c1696c10cbe6f6f31a6-60a-e4a2\",\"user\":\"demo-user\",",
"tlog-play -r journal -M TLOG_REC= <recording_id>",
"--- - name: Deploy session recording excluding users and groups hosts: managed-node-01.example.com tasks: - name: Exclude users and groups ansible.builtin.include_role: name: rhel-system-roles.tlog vars: tlog_scope_sssd: all tlog_exclude_users_sssd: - jeff - james tlog_exclude_groups_sssd: - admins",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"cat /etc/sssd/conf.d/sssd-session-recording.conf",
"journalctl _COMM=tlog-rec-sessio Nov 12 09:17:30 managed-node-01.example.com -tlog-rec-session[1546]: {\"ver\":\"2.3\",\"host\":\"managed-node-01.example.com\",\"rec\":\"07418f2b0f334c1696c10cbe6f6f31a6-60a-e4a2\",\"user\":\"demo-user\",",
"tlog-play -r journal -M TLOG_REC= <recording_id>",
"- name: Host to host VPN hosts: managed-node-01.example.com, managed-node-02.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - hosts: managed-node-01.example.com: managed-node-02.example.com: vpn_manage_firewall: true vpn_manage_selinux: true",
"vpn_connections: - hosts: managed-node-01.example.com: <external_node> : hostname: <IP_address_or_hostname>",
"- name: Multiple VPN hosts: managed-node-01.example.com, managed-node-02.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - name: control_plane_vpn hosts: managed-node-01.example.com: hostname: 192.0.2.0 # IP for the control plane managed-node-02.example.com: hostname: 192.0.2.1 - name: data_plane_vpn hosts: managed-node-01.example.com: hostname: 10.0.0.1 # IP for the data plane managed-node-02.example.com: hostname: 10.0.0.2",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ipsec status | grep <connection_name>",
"ipsec trafficstatus | grep <connection_name>",
"ipsec auto --add <connection_name>",
"- name: Mesh VPN hosts: managed-node-01.example.com, managed-node-02.example.com, managed-node-03.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - opportunistic: true auth_method: cert policies: - policy: private cidr: default - policy: private-or-clear cidr: 198.51.100.0/24 - policy: private cidr: 192.0.2.0/24 - policy: clear cidr: 192.0.2.7/32 vpn_manage_firewall: true vpn_manage_selinux: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"sa_pwd: <sa_password>",
"--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with an existing private key and certificate ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_tls_enable: true mssql_tls_cert: sql_crt.pem mssql_tls_private_key: sql_cert.key mssql_tls_version: 1.2 mssql_tls_force: true",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"/opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U \"sa\" -P <sa_password> -Q 'SELECT SYSTEM_USER'",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"sa_pwd: <sa_password>",
"--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with certificates issued by Red Hat IdM ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_tls_enable: true mssql_tls_certificates: - name: sql_cert dns: server.example.com ca: ipa",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"/opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U \"sa\" -P <sa_password> -Q 'SELECT SYSTEM_USER'",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"sa_pwd: <sa_password>",
"--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with custom storage paths ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_datadir: /var/lib/mssql/ mssql_datadir_mode: '0700' mssql_logdir: /var/log/mssql/ mssql_logdir_mode: '0700'",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ls -ld /var/lib/mssql/' drwx------. 12 mssql mssql 4096 Jul 3 13:53 /var/lib/mssql/",
"ansible managed-node-01.example.com -m command -a 'ls -ld /var/log/mssql/' drwx------. 12 mssql mssql 4096 Jul 3 13:53 /var/log/mssql/",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"sa_pwd: <sa_password> sql_pwd: <SQL_AD_password> ad_admin_pwd: <AD_admin_password>",
"--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with AD authentication ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_ad_configure: true mssql_ad_join: true mssql_ad_sql_user: sqluser mssql_ad_sql_password: \"{{ sql_pwd }}\" ad_integration_realm: ad.example.com ad_integration_user: Administrator ad_integration_password: \"{{ ad_admin_pwd }}\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"C:\\> Set-ADUser -Identity sqluser -KerberosEncryptionType AES128,AES256",
"kinit [email protected]",
"/opt/mssql-tools/bin/sqlcmd -S. -Q 'CREATE LOGIN [AD\\<AD_user>] FROM WINDOWS;'",
"kinit <AD_user> @ad.example.com",
"/opt/mssql-tools/bin/sqlcmd -S. -Q 'SELECT SYSTEM_USER'"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/automating_system_administration_by_using_rhel_system_roles/index |