title | content | commands | url
---|---|---|---|
Chapter 6. Red Hat Decision Manager roles and users | Chapter 6. Red Hat Decision Manager roles and users To access Business Central or KIE Server, you must create users and assign them appropriate roles before the servers are started. You can create users and roles when you install Business Central or KIE Server. If both Business Central and KIE Server are running on a single instance, a user who is authenticated for Business Central can also access KIE Server. However, if Business Central and KIE Server are running on different instances, a user who is authenticated for Business Central must be authenticated separately to access KIE Server. For example, if a user who is authenticated on Business Central but not authenticated on KIE Server tries to view or manage process definitions in Business Central, a 401 error is logged in the log file and the "Invalid credentials to load data from remote server. Contact your system administrator." message appears in Business Central. This section describes Red Hat Decision Manager user roles. Note The admin , analyst , and rest-all roles are reserved for Business Central. The kie-server role is reserved for KIE Server. For this reason, the available roles can differ depending on whether Business Central, KIE Server, or both are installed. admin : Users with the admin role are the Business Central administrators. They can manage users and create, clone, and manage repositories. They have full access to make required changes in the application. Users with the admin role have access to all areas within Red Hat Decision Manager. analyst : Users with the analyst role have access to all high-level features. They can model projects. However, these users cannot add contributors to spaces or delete spaces in the Design Projects view. Access to the Deploy Execution Servers view, which is intended for administrators, is not available to users with the analyst role. However, the Deploy button is available to these users when they access the Library perspective. rest-all : Users with the rest-all role can access Business Central REST capabilities. kie-server : Users with the kie-server role can access KIE Server REST capabilities. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/roles-users-con_planning |
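The role-to-capability mapping above can be spot-checked from a shell once the servers are running. The following sketch is illustrative only: the host, port, endpoint paths, and credentials are assumptions to replace with your own values.

# Assumed example values; substitute your own host, port, and user credentials.
# A user granted rest-all can reach Business Central REST endpoints, for example:
curl -u bcAdmin:bcPassword1! http://localhost:8080/business-central/rest/spaces

# A user granted kie-server can reach KIE Server REST endpoints, for example:
curl -u kieUser:kiePassword1! http://localhost:8080/kie-server/services/rest/server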
22.2. NTP Strata | 22.2. NTP Strata NTP servers are classified according to their synchronization distance from the atomic clocks, which are the source of the time signals. The servers are thought of as being arranged in layers, or strata, from 1 at the top down to 15. Hence the word stratum is used when referring to a specific layer. Atomic clocks are referred to as Stratum 0 because they are the source; however, no Stratum 0 packets are sent on the Internet. All stratum 0 atomic clocks are attached to a server, which is referred to as stratum 1. These servers send out packets marked as Stratum 1. A server which is synchronized by means of packets marked stratum n belongs to the next, lower, stratum and will mark its packets as stratum n+1 . Servers of the same stratum can exchange packets with each other but are still designated as belonging to just the one stratum, the stratum one below the best reference they are synchronized to. The designation Stratum 16 is used to indicate that the server is not currently synchronized to a reliable time source. Note that by default NTP clients act as servers for those systems in the stratum below them. Here is a summary of the NTP Strata: Stratum 0: Atomic Clocks and their signals broadcast over Radio and GPS GPS (Global Positioning System) Mobile Phone Systems Low Frequency Radio Broadcasts WWVB (Colorado, USA), JJY-40 and JJY-60 (Japan), DCF77 (Germany), and MSF (United Kingdom) These signals can be received by dedicated devices and are usually connected by RS-232 to a system used as an organizational or site-wide time server. Stratum 1: Computer with radio clock, GPS clock, or atomic clock attached Stratum 2: Reads from stratum 1; Serves to lower strata Stratum 3: Reads from stratum 2; Serves to lower strata Stratum n+1 : Reads from stratum n ; Serves to lower strata Stratum 15: Reads from stratum 14; This is the lowest stratum. This process continues down to Stratum 15, which is the lowest valid stratum. The label Stratum 16 is used to indicate an unsynchronized state. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-ntp_strata |
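To see this stratum chain from a client's point of view, you can query a running ntpd as shown below; the peers and stratum values are examples and will differ on your system.

# List the peers the local ntpd is using; the "st" column is each peer's stratum.
ntpq -p

# Show the local daemon's own synchronization variables, including its stratum.
ntpq -c rv | grep -o 'stratum=[0-9]*'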
Chapter 5. The WS-Policy Framework | Chapter 5. The WS-Policy Framework Abstract This chapter provides an introduction to the basic concepts of the WS-Policy framework, defining policy subjects and policy assertions, and explaining how policy assertions can be combined to make policy expressions. 5.1. Introduction to WS-Policy Overview The WS-Policy specification provides a general framework for applying policies that modify the semantics of connections and communications at runtime in a Web services application. Apache CXF security uses the WS-Policy framework to configure message protection and authentication requirements. Policies and policy references The simplest way to specify a policy is to embed it directly where you want to apply it. For example, to associate a policy with a specific port in the WSDL contract, you can specify it as follows: An alternative way to specify a policy is to insert a policy reference element, wsp:PolicyReference , at the point where you want to apply the policy and then insert the policy element, wsp:Policy , at some other point in the XML file. For example, to associate a policy with a specific port using a policy reference, you could use a configuration like the following: Where the policy reference, wsp:PolicyReference , locates the referenced policy using the ID, PolicyID (note the addition of the # prefix character in the URI attribute). The policy itself, wsp:Policy , must be identified by adding the attribute, wsu:Id=" PolicyID " . Policy subjects The entities with which policies are associated are called policy subjects . For example, you can associate a policy with an endpoint, in which case the endpoint is the policy subject. It is possible to associate multiple policies with any given policy subject. The WS-Policy framework supports the following kinds of policy subject: the section called "Service policy subject" . the section called "Endpoint policy subject" . the section called "Operation policy subject" . the section called "Message policy subject" . Service policy subject To associate a policy with a service, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of the following WSDL 1.1 element: wsdl:service -apply the policy to all of the ports (endpoints) offered by this service. Endpoint policy subject To associate a policy with an endpoint, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of any of the following WSDL 1.1 elements: wsdl:portType -apply the policy to all of the ports (endpoints) that use this port type. wsdl:binding -apply the policy to all of the ports that use this binding. wsdl:port -apply the policy to this endpoint only. 
For example, you can associate a policy with an endpoint binding as follows (using a policy reference): Operation policy subject To associate a policy with an operation, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of any of the following WSDL 1.1 elements: wsdl:portType/wsdl:operation wsdl:binding/wsdl:operation For example, you can associate a policy with an operation in a binding as follows (using a policy reference): Message policy subject To associate a policy with a message, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of any of the following WSDL 1.1 elements: wsdl:message wsdl:portType/wsdl:operation/wsdl:input wsdl:portType/wsdl:operation/wsdl:output wsdl:portType/wsdl:operation/wsdl:fault wsdl:binding/wsdl:operation/wsdl:input wsdl:binding/wsdl:operation/wsdl:output wsdl:binding/wsdl:operation/wsdl:fault For example, you can associate a policy with a message in a binding as follows (using a policy reference): 5.2. Policy Expressions Overview In general, a wsp:Policy element is composed of multiple different policy settings (where individual policy settings are specified as policy assertions ). Hence, the policy defined by a wsp:Policy element is really a composite object. The content of the wsp:Policy element is called a policy expression , where the policy expression consists of various logical combinations of the basic policy assertions. By tailoring the syntax of the policy expression, you can determine what combinations of policy assertions must be satisfied at runtime in order to satisfy the policy overall. This section describes the syntax and semantics of policy expressions in detail. Policy assertions Policy assertions are the basic building blocks that can be combined in various ways to produce a policy. A policy assertion has two key characteristics: it adds a basic unit of functionality to the policy subject and it represents a boolean assertion to be evaluated at runtime. For example, consider the following policy assertion that requires a WS-Security username token to be propagated with request messages: When associated with an endpoint policy subject, this policy assertion has the following effects: The Web service endpoint marshals/unmarshals the UsernameToken credentials. At runtime, the policy assertion returns true if UsernameToken credentials are provided (on the client side) or received in the incoming message (on the server side); otherwise the policy assertion returns false . Note that if a policy assertion returns false , this does not necessarily result in an error. The net effect of a particular policy assertion depends on how it is inserted into a policy and on how it is combined with other policy assertions. Policy alternatives A policy is built up using policy assertions, which can additionally be qualified using the wsp:Optional attribute, and various nested combinations of the wsp:All and wsp:ExactlyOne elements. The net effect of composing these elements is to produce a range of acceptable policy alternatives . As long as one of these acceptable policy alternatives is satisfied, the overall policy is also satisfied (evaluates to true ). 
For example, consider the following combination of authentication and authorization policy assertions: The preceding policy will be satisfied for a particular incoming request if the following conditions both hold: WS-Security UsernameToken credentials must be present; and A SAML token must be present. Note The wsp:Policy element is semantically equivalent to wsp:All . Hence, if you removed the wsp:All element from the preceding example, you would obtain a semantically equivalent example. wsp:ExactlyOne element When a list of policy assertions is wrapped by the wsp:ExactlyOne element, at least one of the policy assertions in the list must evaluate to true . The runtime goes through the list, evaluating policy assertions until it finds a policy assertion that returns true . At that point, the wsp:ExactlyOne expression is satisfied (returns true ) and any remaining policy assertions from the list will not be evaluated. For example, consider the following combination of authentication policy assertions: The preceding policy will be satisfied for a particular incoming request if either of the following conditions holds: WS-Security UsernameToken credentials are present; or A SAML token is present. Note, in particular, that if both credential types are present, the policy would be satisfied after evaluating one of the assertions, but no guarantees can be given as to which of the policy assertions actually gets evaluated. The empty policy A special case is the empty policy , an example of which is shown in Example 5.1, "The Empty Policy" . Example 5.1. The Empty Policy Where the empty policy alternative, <wsp:All/> , represents an alternative for which no policy assertions need be satisfied. In other words, it always returns true . When <wsp:All/> is available as an alternative, the overall policy can be satisfied even when no policy assertions are true . The null policy A special case is the null policy , an example of which is shown in Example 5.2, "The Null Policy" . Example 5.2. The Null Policy Where the null policy alternative, <wsp:ExactlyOne/> , represents an alternative that is never satisfied. In other words, it always returns false . Normal form In practice, by nesting the <wsp:All> and <wsp:ExactlyOne> elements, you can produce fairly complex policy expressions, whose policy alternatives might be difficult to work out. To facilitate the comparison of policy expressions, the WS-Policy specification defines a canonical or normal form for policy expressions, such that you can read off the list of policy alternatives unambiguously. Every valid policy expression can be reduced to the normal form. In general, a normal form policy expression conforms to the syntax shown in Example 5.3, "Normal Form Syntax" . Example 5.3. Normal Form Syntax Where each line of the form, <wsp:All>... </wsp:All> , represents a valid policy alternative. If one of these policy alternatives is satisfied, the policy is satisfied overall. | [
"<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:service name=\"PingService10\"> <wsdl:port name=\"UserNameOverTransport_IPingService\" binding=\" BindingName \"> <wsp:Policy> <!-- Policy expression comes here! --> </wsp:Policy> <soap:address location=\" SOAPAddress \"/> </wsdl:port> </wsdl:service> </wsdl:definitions>",
"<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:service name=\"PingService10\"> <wsdl:port name=\"UserNameOverTransport_IPingService\" binding=\" BindingName \"> <wsp:PolicyReference URI=\"#PolicyID\"/> <soap:address location=\" SOAPAddress \"/> </wsdl:port> </wsdl:service> <wsp:Policy wsu:Id=\"PolicyID\" > <!-- Policy expression comes here ... --> </wsp:Policy> </wsdl:definitions>",
"<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:binding name=\" EndpointBinding \" type=\"i0:IPingService\"> <wsp:PolicyReference URI=\"#PolicyID\"/> </wsdl:binding> <wsp:Policy wsu:Id=\"PolicyID\" > ... </wsp:Policy> </wsdl:definitions>",
"<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:binding name=\" EndpointBinding \" type=\"i0:IPingService\"> <wsdl:operation name=\"Ping\"> <wsp:PolicyReference URI=\"#PolicyID\"/> <soap:operation soapAction=\"http://xmlsoap.org/Ping\" style=\"document\"/> <wsdl:input name=\"PingRequest\"> ... </wsdl:input> <wsdl:output name=\"PingResponse\"> ... </wsdl:output> </wsdl:operation> </wsdl:binding> <wsp:Policy wsu:Id=\"PolicyID\" > ... </wsp:Policy> </wsdl:definitions>",
"<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:binding name=\" EndpointBinding \" type=\"i0:IPingService\"> <wsdl:operation name=\"Ping\"> <soap:operation soapAction=\"http://xmlsoap.org/Ping\" style=\"document\"/> <wsdl:input name=\"PingRequest\"> <wsp:PolicyReference URI=\"#PolicyID\"/> <soap:body use=\"literal\"/> </wsdl:input> <wsdl:output name=\"PingResponse\"> ... </wsdl:output> </wsdl:operation> </wsdl:binding> <wsp:Policy wsu:Id=\"PolicyID\" > ... </wsp:Policy> </wsdl:definitions>",
"<sp:SupportingTokens xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <wsp:Policy> <sp:UsernameToken/> </wsp:Policy> </sp:SupportingTokens>",
"<wsp:Policy wsu:Id=\"AuthenticateAndAuthorizeWSSUsernameTokenPolicy\"> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken/> </wsp:Policy> </sp:SupportingTokens> <sp:SupportingTokens> <wsp:Policy> <sp:SamlToken/> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:Policy>",
"<wsp:Policy wsu:Id=\"AuthenticateUsernamePasswordPolicy\"> <wsp:ExactlyOne> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken/> </wsp:Policy> </sp:SupportingTokens> <sp:SupportingTokens> <wsp:Policy> <sp:SamlToken/> </wsp:Policy> </sp:SupportingTokens> </wsp:ExactlyOne> </wsp:Policy>",
"<wsp:Policy ... > <wsp:ExactlyOne> <wsp:All/> </wsp:ExactlyOne> </wsp:Policy>",
"<wsp:Policy ... > <wsp:ExactlyOne/> </wsp:Policy>",
"<wsp:Policy ... > <wsp:ExactlyOne> <wsp:All> < Assertion .../> ... < Assertion .../> </wsp:All> <wsp:All> < Assertion .../> ... < Assertion .../> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/wspolicy |
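When working with larger contracts, it can help to list where policies and policy references are attached before reasoning about alternatives. The following sketch is not part of Apache CXF; it assumes libxml2's xmllint is installed and that the contract is saved as service.wsdl.

# Print the URI of every wsp:PolicyReference in the contract.
xmllint --xpath "//*[local-name()='PolicyReference']/@URI" service.wsdl

# Print the wsu:Id of every wsp:Policy element, to cross-check against the references.
xmllint --xpath "//*[local-name()='Policy']/@*[local-name()='Id']" service.wsdl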
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_jlink_to_customize_java_runtime_environment/proc-providing-feedback-on-redhat-documentation |
Chapter 10. Diagnosing and Correcting Problems in a Cluster | Chapter 10. Diagnosing and Correcting Problems in a Cluster Cluster problems, by nature, can be difficult to troubleshoot. This is due to the increased complexity that a cluster of systems introduces as opposed to diagnosing issues on a single system. However, there are common issues that system administrators are more likely to encounter when deploying or administering a cluster. Understanding how to tackle those common issues can help make deploying and administering a cluster much easier. This chapter provides information about some common cluster issues and how to troubleshoot them. Additional help can be found in our knowledge base and by contacting an authorized Red Hat support representative. If your issue is related to the GFS2 file system specifically, you can find information about troubleshooting common GFS2 issues in the Global File System 2 document. 10.1. Configuration Changes Do Not Take Effect When you make changes to a cluster configuration, you must propagate those changes to every node in the cluster. When you configure a cluster using Conga , Conga propagates the changes automatically when you apply the changes. For information on propagating changes to cluster configuration with the ccs command, see Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . For information on propagating changes to cluster configuration with command line tools, see Section 9.4, "Updating a Configuration" . If you make any of the following configuration changes to your cluster, it is not necessary to restart the cluster after propagating those changes for the changes to take effect. Deleting a node from the cluster configuration, except where the node count changes from greater than two nodes to two nodes. Adding a node to the cluster configuration, except where the node count changes from two nodes to greater than two nodes. Changing the logging settings. Adding, editing, or deleting HA services or VM components. Adding, editing, or deleting cluster resources. Adding, editing, or deleting failover domains. Changing any corosync or openais timers. If you make any other configuration changes to your cluster, however, you must restart the cluster to implement those changes. The following cluster configuration changes require a cluster restart to take effect: Adding or removing the two_node option from the cluster configuration file. Renaming the cluster. Adding, changing, or deleting heuristics for quorum disk, changing any quorum disk timers, or changing the quorum disk device. For these changes to take effect, a global restart of the qdiskd daemon is required. Changing the central_processing mode for rgmanager . For this change to take effect, a global restart of rgmanager is required. Changing the multicast address. Switching the transport mode from UDP multicast to UDP unicast, or switching from UDP unicast to UDP multicast. You can restart the cluster using Conga , the ccs command, or command line tools. For information on restarting a cluster with Conga , see Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters" . For information on restarting a cluster with the ccs command, see Section 7.2, "Starting and Stopping a Cluster" . For information on restarting a cluster with command line tools, see Section 9.1, "Starting and Stopping the Cluster Software" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-troubleshoot-ca |
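As a hedged illustration of the propagation guidance above (the hostname is a placeholder and the exact commands depend on which tool set you use), the ccs and cman_tool utilities can push an updated cluster.conf and confirm that all nodes agree on it:

# Propagate and activate the updated cluster.conf from the node where it was edited;
# node01 is a placeholder hostname and ricci must be running on all cluster nodes.
ccs -h node01 --sync --activate

# Alternatively, after incrementing config_version in /etc/cluster/cluster.conf,
# distribute the new configuration version throughout the cluster.
cman_tool version -r

# Verify that every node now reports an identical configuration.
ccs -h node01 --checkconf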
Chapter 6. Bare Metal (ironic) Parameters | Chapter 6. Bare Metal (ironic) Parameters You can modify the ironic service with bare metal parameters. Parameter Description AdditionalArchitectures List of additional architectures to enable. ApacheCertificateKeySize Override the private key size used when creating the certificate for this service. CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . IPAImageURLs IPA image URLs, the format should be ["http://path/to/kernel", "http://path/to/ramdisk"]. IronicAutomatedClean Enables or disables automated cleaning. Disabling automated cleaning might result in security problems and deployment failures on rebuilds. Do not set to False unless you understand the consequences of disabling this feature. The default value is True . IronicCleaningDiskErase Type of disk cleaning before and between deployments. full for full cleaning. metadata to clean only disk metadata (partition table). The default value is full . IronicCleaningNetwork Name or UUID of the overcloud network used for cleaning bare metal nodes. Set to provisioning during the initial deployment (when no networks are created yet) and change to an actual UUID in a post-deployment stack update. The default value is provisioning . IronicConductorGroup The name of an OpenStack Bare Metal (ironic) Conductor Group. IronicConfigureSwiftTempUrlKey Whether to configure Swift temporary URLs for use with the "direct" and "ansible" deploy interfaces. The default value is True . IronicCorsAllowedOrigin Indicate whether this resource may be shared with the domain received in the request "origin" header. IronicDefaultBootOption How to boot the bare metal instances. Set to local to use local bootloader (requires grub2 for partition images). Set to netboot to make the instances boot from controllers using PXE/iPXE. The default value is local . IronicDefaultDeployInterface Deploy interface implementation to use by default. Leave empty to use the hardware type default. IronicDefaultInspectInterface Inspect interface implementation to use by default. Leave empty to use the hardware type default. IronicDefaultNetworkInterface Network interface implementation to use by default. Set to flat to use one flat provider network. Set to neutron to make OpenStack Bare Metal (ironic) interact with the OpenStack Networking (neutron) ML2 driver to enable other network types and certain advanced networking features. Requires IronicProvisioningNetwork to be correctly set. The default value is flat . IronicDefaultRescueInterface Default rescue implementation to use. The "agent" rescue requires a compatible ramdisk to be used. The default value is agent . IronicDefaultResourceClass Default resource class to use for new nodes. IronicDeployLogsStorageBackend Backend to use to store ramdisk logs, either "local" or "swift". The default value is local . IronicDhcpv6StatefulAddressCount Number of IPv6 addresses to allocate for ports created for provisioning, cleaning, rescue or inspection on DHCPv6-stateful networks. Different stages of the chain-loading process will request addresses with different CLID/IAID. Due to non- identical identifiers multiple addresses must be reserved for the host to ensure each step of the boot process can successfully lease addresses. The default value is 4 . IronicEnabledBiosInterfaces Enabled BIOS interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['no-bios'] . 
IronicEnabledBootInterfaces Enabled boot interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['ipxe', 'pxe'] . IronicEnabledConsoleInterfaces Enabled console interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['ipmitool-socat', 'no-console'] . IronicEnabledDeployInterfaces Enabled deploy interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['iscsi', 'direct'] . IronicEnabledHardwareTypes Enabled OpenStack Bare Metal (ironic) hardware types. The default value is ['ipmi', 'redfish'] . IronicEnabledInspectInterfaces Enabled inspect interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['no-inspect'] . IronicEnabledManagementInterfaces Enabled management interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['ipmitool', 'noop', 'redfish'] . IronicEnabledNetworkInterfaces Enabled network interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['flat', 'neutron'] . IronicEnabledPowerInterfaces Enabled power interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['ipmitool', 'redfish'] . IronicEnabledRaidInterfaces Enabled RAID interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['no-raid', 'agent'] . IronicEnabledRescueInterfaces Enabled rescue interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['no-rescue', 'agent'] . IronicEnabledStorageInterfaces Enabled storage interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['cinder', 'noop'] . IronicEnabledVendorInterfaces Enabled vendor interface implementations. Each hardware type must have at least one valid implementation enabled. The default value is ['ipmitool', 'no-vendor'] . IronicEnableStagingDrivers Whether to enable use of staging drivers. The default value is False . IronicForcePowerStateDuringSync Whether to force power state during sync. The default value is True . IronicImageDownloadSource Image delivery method for the "direct" deploy interface. Use "swift" for the Object Storage temporary URLs, use "http" for the local HTTP server (the same as for iPXE). The default value is swift . IronicInspectorCollectors Comma-separated list of IPA inspection collectors. The default value is default,logs . IronicInspectorDiscoveryDefaultDriver The default driver to use for newly discovered nodes (requires IronicInspectorEnableNodeDiscovery set to True). This driver is automatically added to enabled_drivers. The default value is ipmi . IronicInspectorEnableNodeDiscovery Makes ironic-inspector enroll any unknown node that PXE-boots introspection ramdisk in OpenStack Bare Metal (ironic). The default driver to use for new nodes is specified by the IronicInspectorDiscoveryDefaultDriver parameter. Introspection rules can also be used to specify it. The default value is False . IronicInspectorExtraProcessingHooks Comma-separated list of processing hooks to append to the default list. The default value is extra_hardware,lldp_basic,local_link_connection . 
IronicInspectorInterface Network interface on which inspection dnsmasq will listen. The default value is br-ex . IronicInspectorIpRange Temporary IP range that will be given to nodes during the inspection process. This should not overlap with any range that OpenStack Networking (neutron) DHCP allocates, but it has to be routeable back to ironic-inspector . This option has no meaningful defaults, and thus is required. IronicInspectorIPXEEnabled Whether to use iPXE for inspection. The default value is True . IronicInspectorKernelArgs Kernel args for the OpenStack Bare Metal (ironic) inspector. The default value is ipa-inspection-dhcp-all-interfaces=1 ipa-collect-lldp=1 ipa-debug=1 . IronicInspectorSubnets Temporary IP ranges that will be given to nodes during the inspection process. These ranges should not overlap with any range that OpenStack Networking (neutron) DHCP provides, but they need to be routeable back to the ironic-inspector API. This option has no meaningful defaults and is required. IronicInspectorUseSwift Whether to use Swift for storing introspection data. The default value is True . IronicIpVersion The IP version that will be used for PXE booting. The default value is 4 . IronicIPXEEnabled Whether to use iPXE instead of PXE for deployment. The default value is True . IronicIPXEPort Port to use for serving images when iPXE is used. The default value is 8088 . IronicIPXETimeout IPXE timeout in second. Set to 0 for infinite timeout. The default value is 60 . IronicIPXEUefiSnpOnly Whether to use SNP (Simple Network Protocol) iPXE EFI, or not. When set to true ipxe-snponly EFI is used. The default value is True . IronicPassword The password for the Bare Metal service and database account. IronicPowerStateChangeTimeout Number of seconds to wait for power operations to complete, i.e., so that a baremetal node is in the desired power state. If timed out, the power operation is considered a failure. The default value is 60 . IronicProvisioningNetwork Name or UUID of the overcloud network used for provisioning of bare metal nodes if IronicDefaultNetworkInterface is set to neutron . Set to provisioning during the initial deployment (when no networks are created yet) and change to an actual UUID in a post-deployment stack update. The default value is provisioning . IronicRescuingNetwork Name or UUID of the overcloud network used for rescuing of bare metal nodes, if IronicDefaultRescueInterface is not set to "no-rescue". The default value of "provisioning" can be left during the initial deployment (when no networks are created yet) and should be changed to an actual UUID in a post-deployment stack update. The default value is provisioning . IronicRpcTransport The remote procedure call transport between conductor and API processes, such as a messaging broker or JSON RPC. MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is True . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/ref_bare-metal-ironic-parameters_overcloud_parameters |
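These parameters are typically set in a custom environment file that is passed to the overcloud deployment command with -e. The sketch below shows the general pattern only; the file path, the chosen parameters, and the remaining deploy arguments are illustrative assumptions.

# Write an example environment file that overrides a few bare metal parameters.
cat > /home/stack/templates/ironic-overrides.yaml <<'EOF'
parameter_defaults:
  IronicCleaningDiskErase: metadata
  IronicIPXEEnabled: true
  IronicInspectorSubnets:
    - ip_range: 192.168.25.100,192.168.25.120
EOF

# Include the file alongside your existing environment files when deploying.
openstack overcloud deploy --templates \
  -e /home/stack/templates/ironic-overrides.yaml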
Transitioning to Containerized Services | Transitioning to Containerized Services Red Hat OpenStack Platform 16.0 A basic guide to working with OpenStack Platform containerized services OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/index |
Chapter 4. Verifying successful CephFS through NFS deployment | Chapter 4. Verifying successful CephFS through NFS deployment When you deploy CephFS through NFS as a back end of the Shared File Systems service (manila), you add the following new elements to the overcloud environment: StorageNFS network Ceph MDS service on the controllers NFS-Ganesha service on the controllers As the cloud administrator, you must verify the stability of the CephFS through NFS environment before you make it available to service users. 4.1. Verifying creation of isolated StorageNFS network The network_data_ganesha.yaml file used to deploy CephFS through NFS as a Shared File Systems service back end creates the StorageNFS VLAN. Complete the following steps to verify the existence of the isolated StorageNFS network. Prerequisites Complete the steps in CephFS through NFS-Ganesha installation . Procedure Log in to one of the controllers in the overcloud. Enter the following command to check the connected networks and verify the existence of the VLAN as set in network_data_ganesha.yaml : 4.2. Verifying Ceph MDS service Use the systemctl status command to verify the Ceph MDS service status. Procedure Enter the following command on all Controller nodes to check the status of the MDS container: Example: 4.3. Verifying Ceph cluster status Complete the following steps to verify Ceph cluster status. Procedure Log in to the active Controller node. Enter the following command: There is one active MDS and two MDSs on standby. To check the status of the Ceph file system in more detail, enter the following command and replace <cephfs> with the name of the Ceph file system: 4.4. Verifying NFS-Ganesha and manila-share service status Complete the following step to verify the status of the NFS-Ganesha and manila-share services. Procedure Enter the following command from one of the Controller nodes to confirm that ceph-nfs and openstack-manila-share started: 4.5. Verifying that the manila-api service acknowledges scheduler and share services Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services. Procedure Log in to the undercloud. Enter the following command: Enter the following command to confirm manila-scheduler and manila-share are enabled: | [
"ip a 15: vlan310: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether 32:80:cf:0e:11:ca brd ff:ff:ff:ff:ff:ff inet 172.16.4.4/24 brd 172.16.4.255 scope global vlan310 valid_lft forever preferred_lft forever inet 172.16.4.7/32 brd 172.16.4.255 scope global vlan310 valid_lft forever preferred_lft forever inet6 fe80::3080:cfff:fe0e:11ca/64 scope link valid_lft forever preferred_lft forever",
"systemctl status ceph-mds<@CONTROLLER-HOST>",
"systemctl status [email protected] [email protected] - Ceph MDS Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago Main PID: 65066 (conmon) Tasks: 16 (limit: 204320) Memory: 38.2M CGroup: /system.slice/system-ceph\\x2dmds.slice/[email protected] └─60921 /usr/bin/podman run --rm --net=host --memory=32000m --cpus=4 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro>",
"sudo ceph -s cluster: id: 3369e280-7578-11e8-8ef3-801844eeec7c health: HEALTH_OK services: mon: 3 daemons, quorum overcloud-controller-1,overcloud-controller-2,overcloud-controller-0 mgr: overcloud-controller-1(active), standbys: overcloud-controller-2, overcloud-controller-0 mds: cephfs-1/1/1 up {0=overcloud-controller-0=up:active}, 2 up:standby osd: 6 osds: 6 up, 6 in",
"sudo ceph fs ls name: cephfs, metadata pool: manila_metadata, data pools: [manila_data]",
"pcs status ceph-nfs (systemd:ceph-nfs@pacemaker): Started overcloud-controller-1 container: openstack-manila-share [192.168.24.1:8787/rhosp-rhel8/openstack-manila-share:pcmklatest] openstack-manila-share-podman-0 (ocf::heartbeat:podman): Started overcloud-controller-1",
"source /home/stack/overcloudrc",
"manila service-list | Id | Binary | Host | Zone | Status | State | Updated_at | | 2 | manila-scheduler | hostgroup | nova | enabled | up | 2018-08-08T04:15:03.000000 | | 5 | manila-share | hostgroup@cephfs | nova | enabled | up | 2018-08-08T04:15:03.000000 |"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/assembly-cephfs-verifying_cephfs-nfs |
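Once the checks above pass, a simple end-to-end smoke test is to create a small NFS share and grant a client network access to it. The share type, share name, and IP range below are assumed example values, not required ones.

# Create a 1 GB NFS share (the share type name "default" is an assumed example).
manila create --share-type default --name smoke-test-share nfs 1

# Allow an example client network, such as the StorageNFS range, to mount the share.
manila access-allow smoke-test-share ip 172.16.4.0/24

# Print the export location that NFS clients will mount.
manila share-export-location-list smoke-test-share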
Chapter 1. About CI/CD | Chapter 1. About CI/CD OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, OpenShift Container Platform provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps 1.1. OpenShift Builds With OpenShift Builds, you can create cloud-native apps by using a declarative build process. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object typically builds a runnable image and pushes it to a container image registry. OpenShift Builds supports the following build strategies: Docker build Source-to-image (S2I) build Custom build For more information, see Understanding image builds 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to meet on-demand pipelines with predictable outcomes. For more information, see Red Hat OpenShift Pipelines . 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Red Hat OpenShift GitOps . 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with OpenShift Container Platform. Jenkins can be deployed on OpenShift by using the Samples Operator templates or a certified Helm chart. For more information, see Configuring Jenkins images . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cicd_overview/ci-cd-overview |
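As a minimal sketch of the OpenShift Builds flow described above (the repository URL and names are placeholders), you can create a BuildConfig from a Git repository and follow the resulting build:

# Create an S2I-based BuildConfig from an example repository (placeholder URL).
oc new-build https://github.com/example/my-app.git --name=my-app

# Start a build from that BuildConfig and stream its logs until it completes.
oc start-build my-app --follow

# Inspect the builds and the image stream that the build pushed to.
oc get builds
oc get imagestream my-app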
7.2. sVirt Labeling | 7.2. sVirt Labeling Like other services under the protection of SELinux, sVirt uses process-based mechanisms and restrictions to provide an extra layer of security over guest instances. Under typical use, you should not even notice that sVirt is working in the background. This section describes the labeling features of sVirt. As shown in the following output, when using sVirt, each Virtual Machine (VM) process is labeled and runs with a dynamically generated level. Each process is isolated from other VMs with different levels: The actual disk images are automatically labeled to match the processes, as shown in the following output: The following table outlines the different labels that can be assigned when using sVirt: Table 7.1. sVirt Labels Type SELinux Context Description Virtual Machine Processes system_u:system_r:svirt_t:MCS1 MCS1 is a randomly selected MCS field. Currently approximately 500,000 labels are supported. Virtual Machine Image system_u:object_r:svirt_image_t:MCS1 Only processes labeled svirt_t with the same MCS fields are able to read/write these image files and devices. Virtual Machine Shared Read/Write Content system_u:object_r:svirt_image_t:s0 All processes labeled svirt_t are allowed to write to the svirt_image_t:s0 files and devices. Virtual Machine Image system_u:object_r:virt_content_t:s0 System default label used when an image exists. No svirt_t virtual processes are allowed to read files/devices with this label. It is also possible to perform static labeling when using sVirt. Static labels allow the administrator to select a specific label, including the MCS/MLS field, for a virtual machine. Administrators who run statically-labeled virtual machines are responsible for setting the correct label on the image files. The virtual machine will always be started with that label, and the sVirt system will never modify the label of a statically-labeled virtual machine's content. This allows the sVirt component to run in an MLS environment. You can also run multiple virtual machines with different sensitivity levels on a system, depending on your requirements. | [
"~]# ps -eZ | grep qemu system_u:system_r:svirt_t:s0:c87,c520 27950 ? 00:00:17 qemu-kvm system_u:system_r:svirt_t:s0:c639,c757 27989 ? 00:00:06 qemu-system-x86",
"~]# ls -lZ /var/lib/libvirt/images/* system_u:object_r:svirt_image_t:s0:c87,c520 image1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/ch07s02 |
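For the static labeling case described above, a minimal sketch is shown below; the image path and the MCS category pair c100,c200 are assumed examples, and the virtual machine definition must also be configured to start with the same label.

# Apply a static sVirt image label with an example MCS category pair.
chcon -t svirt_image_t -l s0:c100,c200 /var/lib/libvirt/images/image1

# Confirm the label that is now set on the image file.
ls -lZ /var/lib/libvirt/images/image1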
Chapter 43. Hardware Enablement | Chapter 43. Hardware Enablement LSI Syncro CS HA-DAS adapters Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 and later are encouraged to provide feedback to Red Hat and LSI. (BZ#1062759) tss2 enables TPM 2.0 for IBM Power LE The tss2 package adds IBM implementation of a Trusted Computing Group Software Stack (TSS) 2.0 as a Technology Preview for the IBM Power LE architecture. This package enables users to interact with TPM 2.0 devices. (BZ#1384452) ibmvnic Device Driver Starting with Red Hat Enterprise Linux 7.3, the ibmvnic Device Driver has been available as a Technology Preview for IBM POWER architectures. vNIC (Virtual Network Interface Controller) is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that when combined with SR-IOV NIC provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. (BZ# 1391561 , BZ#947163) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/technology_previews_hardware_enablement |
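If you need to confirm whether one of these Technology Preview drivers is available on a given host, a quick hedged check (module names taken from the text above) is:

# Show the description and version metadata for the megaraid_sas driver, if present.
modinfo -d megaraid_sas
modinfo -F version megaraid_sas

# Check whether the ibmvnic module is currently loaded (IBM POWER systems only).
lsmod | grep -w ibmvnic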
Chapter 4. Configuration | Chapter 4. Configuration Camel Quarkus automatically configures and deploys a Camel Context bean which by default is started/stopped according to the Quarkus Application lifecycle. The configuration step happens at build time during Quarkus' augmentation phase and it is driven by the Camel Quarkus extensions which can be tuned using Camel Quarkus specific quarkus.camel.* properties. Note quarkus.camel.* configuration properties are documented on the individual extension pages - for example see Camel Quarkus Core . After the configuration is done, a minimal Camel Runtime is assembled and started in the RUNTIME_INIT phase. 4.1. Configuring Camel components 4.1.1. application.properties To configure components and other aspects of Apache Camel through properties, make sure that your application depends on camel-quarkus-core directly or transitively. Because most Camel Quarkus extensions depend on camel-quarkus-core , you typically do not need to add it explicitly. camel-quarkus-core brings functionalities from Camel Main to Camel Quarkus. In the example below, you set a specific ExchangeFormatter configuration on the LogComponent via application.properties : camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false 4.1.2. CDI You can also configure a component programmatically using CDI. The recommended method is to observe the ComponentAddEvent and configure the component before the routes and the CamelContext are started: import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } } 4.1.2.1. Producing a @Named component instance Alternatively, you can create and configure the component yourself in a @Named producer method. This works as Camel uses the component URI scheme to look-up components from its registry. For example, in the case of a LogComponent Camel looks for a log named bean. Warning While producing a @Named component bean will usually work, it may cause subtle issues with some components. Camel Quarkus extensions may do one or more of the following: Pass custom subtype of the default Camel component type. See the Vert.x WebSocket extension example. Perform some Quarkus specific customization of the component. See the JPA extension example. These actions are not performed when you produce your own component instance, therefore, configuring components in an observer method is the recommended method. 
import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named("log") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } } 1 The "log" argument of the @Named annotation can be omitted if the name of the method is the same. 4.2. Configuration by convention In addition to support configuring Camel through properties, camel-quarkus-core allows you to use conventions to configure the Camel behavior. For example, if there is a single ExchangeFormatter instance in the CDI container, then it will automatically wire that bean to the LogComponent . Additional resources Configuring and using Metering in OpenShift Container Platform | [
"camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named(\"log\") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-configuration |
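Because the camel.component.* keys shown above are ordinary configuration properties, they can usually also be supplied from outside application.properties, for example as JVM system properties or environment variables. The following sketch assumes a standard Maven project layout and the default fast-jar packaging:

# Override a component property for a single dev-mode run via a system property.
mvn quarkus:dev -Dcamel.component.log.exchange-formatter.show-exchange-pattern=true

# Or export the equivalent environment variable before starting a packaged application.
export CAMEL_COMPONENT_LOG_EXCHANGE_FORMATTER_SHOW_BODY_TYPE=false
java -jar target/quarkus-app/quarkus-run.jar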
B.104. xguest | B.104. xguest B.104.1. RHBA-2010:0853 - xguest bug fix update An updated xguest package that fixes a bug is now available. The xguest package sets up the xguest user which can be used as a temporary account to switch to or as a kiosk user account. These accounts are disabled unless SELinux is in enforcing mode. Bug Fix BZ# 641811 Previously, xguest installed its 'sabayon' profile file in the wrong directory. This would cause packagekit and seapplet to be started by default for the xguest user. With this update, the 'sabayon' profile file is installed in the correct directory. All users of xguest are advised to upgrade to this updated package, which resolves this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/xguest |
Release notes for Red Hat build of OpenJDK 11.0.11 | Release notes for Red Hat build of OpenJDK 11.0.11 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.11/index |
Preface | Preface Red Hat Quay is an enterprise-quality container registry. Use Quay to build and store containers, then deploy them to the servers across your enterprise. This procedure describes how to deploy a high availability, enterprise-quality Red Hat Quay setup. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/pr01 |
Chapter 2. Installing a user-provisioned cluster on bare metal | Chapter 2. Installing a user-provisioned cluster on bare metal In OpenShift Container Platform 4.15, you can install a cluster on bare metal infrastructure that you provision. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. 
This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 
Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 2.3.4. Requirements for baremetal clusters on vSphere Ensure you enable the disk.EnableUUID parameter on all virtual machines in your cluster. Additional resources See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for details on setting the disk.EnableUUID parameter's value to TRUE on VMware vSphere for user-provisioned infrastructure. 2.3.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. 
Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. 
A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. 
IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 2.3.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. 
Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. 
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
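Before you begin, you can optionally confirm that the installation program you downloaded runs on your host and reports the release that you expect, for example:
$ ./openshift-install version
This check is not part of the documented procedure; it is only a quick way to catch an architecture or version mismatch before you create the configuration files.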
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 2.9.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. 
Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 
See Cluster capabilities for more information on enabling cluster capabilities that were disabled before installation. See Optional cluster capabilities in OpenShift Container Platform 4.15 for more information about the features provided by each capability. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
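As noted at the start of this section, bare metal clusters whose node IP addresses fall outside the networking.machineNetwork[].cidr range must also carry those addresses in the noProxy field. The following fragment is a hypothetical illustration only; the 192.168.1.0/24 subnet stands in for the network that contains your cluster nodes and is not a value taken from this procedure:
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com,192.168.1.0/24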
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 2.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. 
If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 2.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. 
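If you do not already have a web server available for a lab or test installation, one possible stopgap, shown here only as an unofficial convenience and not suitable for production use, is the HTTP server that is built into Python, started from the directory that holds your Ignition config files:
$ python3 -m http.server 8080 --directory <installation_directory>
If you serve the files this way, include the chosen port (8080 in this sketch) in the Ignition config URLs that you pass to coreos-installer in the later steps.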
Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation.
2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. 
Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs , and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files.
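As an illustration of the upload step, the following sketch fetches the x86_64 artifacts reported by openshift-install and places them in an HTTP server document root. The /var/www/html path and the use of curl are assumptions; substitute the document root and tooling that your HTTP server actually uses, and keep the <url> and <release> values from the command output: USD curl -L -o /var/www/html/rhcos-<release>-live-kernel-x86_64 "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" USD curl -L -o /var/www/html/rhcos-<release>-live-initramfs.x86_64.img "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" USD curl -L -o /var/www/html/rhcos-<release>-live-rootfs.x86_64.img "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"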
Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 2.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. 
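For illustration, the kernel-parameter approach mentioned above might look like the following fragment added to the APPEND (PXE) or kernel (iPXE) line for a node that needs a static IP address. The addresses and interface name are placeholders, and the complete ip= and nameserver= syntax is documented in the "Advanced RHCOS installation reference" section later in this document: rd.neednet=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41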
To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 2.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 2.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. 
Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 2.11.3.2.2. 
Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 2.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 
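For example, a PXE APPEND fragment that supplies a live install Ignition config, together with the arguments that are required for it to be honored, might look like the following sketch. The URL is a placeholder for wherever you host the live Ignition config: ignition.config.url=http://<HTTP_server>/live.ign ignition.firstboot ignition.platform.id=metal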
2.11.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.15 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 2.11.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 2.11.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system.
For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 2.11.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 2.11.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. 
If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 2.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 2.11.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. 
For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 2.11.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 2.11.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 
4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 2.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 2.11.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. 
For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 2.11.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.11.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. 
If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . 
For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.11.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 2.9. 
coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. 
--live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 2.11.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 2.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. 
This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 2.11.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . 
For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 2.11.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. 
This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. Additional resources See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for more information on using special coreos.inst.* arguments to direct the live installer. 2.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 2.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.14. 
Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
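If you want to double-check who submitted a particular request before approving it, as the note above recommends, you can print the requestor recorded in the CSR object. The following one-liner is an illustration only and is not part of the documented procedure; <csr_name> is a placeholder for a name taken from the oc get csr output: oc get csr <csr_name> -o jsonpath='{.spec.username}{" "}{.spec.groups}{"\n"}' The output shows the user name and groups that created the request, which you can compare against the expected node-bootstrapper service account or system:node:<node_name> identity before approving.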
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 2.15.1. 
Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.15.2.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 2.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.15.2.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 2.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.18. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_bare_metal/installing-bare-metal |
4.6. Securing Virtual Private Networks (VPNs) Using Libreswan | 4.6. Securing Virtual Private Networks (VPNs) Using Libreswan In Red Hat Enterprise Linux 7, a Virtual Private Network ( VPN ) can be configured using the IPsec protocol, which is supported by the Libreswan application. Libreswan is a continuation of the Openswan application and many examples from the Openswan documentation are interchangeable with Libreswan . The NetworkManager IPsec plug-in is called NetworkManager-libreswan . Users of GNOME Shell should install the NetworkManager-libreswan-gnome package, which has NetworkManager-libreswan as a dependency. Note that the NetworkManager-libreswan-gnome package is only available from the Optional channel. See Enabling Supplementary and Optional Repositories . The IPsec protocol for VPN is itself configured using the Internet Key Exchange ( IKE ) protocol. The terms IPsec and IKE are used interchangeably. An IPsec VPN is also called an IKE VPN, IKEv2 VPN, XAUTH VPN, Cisco VPN or IKE/IPsec VPN. A variant of an IPsec VPN that also uses the Level 2 Tunneling Protocol ( L2TP ) is usually called an L2TP/IPsec VPN, which requires the Optional channel xl2tpd application. Libreswan is an open-source, user-space IKE implementation available in Red Hat Enterprise Linux 7. IKE versions 1 and 2 are implemented as a user-level daemon. The IKE protocol itself is also encrypted. The IPsec protocol is implemented by the Linux kernel, and Libreswan configures the kernel to add and remove VPN tunnel configurations. The IKE protocol uses UDP ports 500 and 4500. The IPsec protocol consists of two different protocols, Encapsulated Security Payload ( ESP ), which has protocol number 50, and Authenticated Header ( AH ), which has protocol number 51. The AH protocol is not recommended for use. Users of AH are recommended to migrate to ESP with null encryption. The IPsec protocol has two different modes of operation, Tunnel Mode (the default) and Transport Mode . It is possible to configure the kernel with IPsec without IKE. This is called Manual Keying . It is possible to configure manual keying using the ip xfrm commands; however, this is strongly discouraged for security reasons. Libreswan interfaces with the Linux kernel using netlink. Packet encryption and decryption happen in the Linux kernel. Libreswan uses the Network Security Services ( NSS ) cryptographic library. Both libreswan and NSS are certified for use with the Federal Information Processing Standard ( FIPS ) Publication 140-2. Important IKE / IPsec VPNs, implemented by Libreswan and the Linux kernel, are the only VPN technology recommended for use in Red Hat Enterprise Linux 7. Do not use any other VPN technology without understanding the risks of doing so. 4.6.1. Installing Libreswan To install Libreswan , enter the following command as root : To check that Libreswan is installed: After a new installation of Libreswan , the NSS database should be initialized as part of the installation process. Before you start a new database, remove the old database as follows: Then, to initialize a new NSS database, enter the following command as root : Only when operating in FIPS mode is it necessary to protect the NSS database with a password.
To initialize the database for FIPS mode, instead of the command above, use: To start the ipsec daemon provided by Libreswan , issue the following command as root : To confirm that the daemon is now running: To ensure that Libreswan will start when the system starts, issue the following command as root : Configure any intermediate as well as host-based firewalls to permit the ipsec service. See Chapter 5, Using Firewalls for information on firewalls and allowing specific services to pass through. A minimal firewalld sketch is also provided after the command listing for this section. Libreswan requires the firewall to allow the following packets: UDP port 500 and 4500 for the Internet Key Exchange ( IKE ) protocol Protocol 50 for Encapsulated Security Payload ( ESP ) IPsec packets Protocol 51 for Authenticated Header ( AH ) IPsec packets (uncommon) We present three examples of using Libreswan to set up an IPsec VPN. The first example is for connecting two hosts together so that they may communicate securely. The second example is connecting two sites together to form one network. The third example is supporting remote users, known as road warriors in this context. 4.6.2. Creating VPN Configurations Using Libreswan Libreswan does not use the terms " source " and " destination " or " server " and " client " since IKE/IPsec are peer to peer protocols. Instead, it uses the terms " left " and " right " to refer to end points (the hosts). This also allows the same configuration to be used on both end points in most cases, although a lot of administrators choose to always use " left " for the local host and " right " for the remote host. There are four commonly used methods for authentication of endpoints: Pre-Shared Keys ( PSK ) is the simplest authentication method. PSKs should consist of random characters and have a length of at least 20 characters. In FIPS mode, PSKs need to comply with a minimum strength requirement depending on the integrity algorithm used. It is recommended not to use PSKs shorter than 64 random characters. Raw RSA keys are commonly used for static host-to-host or subnet-to-subnet IPsec configurations. The hosts are manually configured with each other's public RSA key. This method does not scale well when dozens or more hosts all need to set up IPsec tunnels to each other. X.509 certificates are commonly used for large-scale deployments where there are many hosts that need to connect to a common IPsec gateway. A central certificate authority ( CA ) is used to sign RSA certificates for hosts or users. This central CA is responsible for relaying trust, including the revocations of individual hosts or users. NULL Authentication is used to gain mesh encryption without authentication. It protects against passive attacks but does not protect against active attacks. However, since IKEv2 allows asymmetrical authentication methods, NULL Authentication can also be used for internet scale Opportunistic IPsec, where clients authenticate the server, but servers do not authenticate the client. This model is similar to secure websites using TLS (also known as https:// websites). In addition to these authentication methods, an additional authentication can be added to protect against possible attacks by quantum computers. This additional authentication method is called Postquantum Preshared Keys ( PPK ). Individual clients or groups of clients can use their own PPK by specifying a PPK ID ( PPKID ) that corresponds to an out-of-band configured PreShared Key. See Section 4.6.9, "Using the Protection against Quantum Computers" . 4.6.3.
Creating Host-To-Host VPN Using Libreswan To configure Libreswan to create a host-to-host IPsec VPN, between two hosts referred to as " left " and " right " , enter the following commands as root on both of the hosts ( " left " and " right " ) to create new raw RSA key pairs: This generates an RSA key pair for the host. The process of generating RSA keys can take many minutes, especially on virtual machines with low entropy. To view the host public key so it can be specified in a configuration as the " left " side, issue the following command as root on the host where the new hostkey was added, using the CKAID returned by the " newhostkey " command: You will need this key to add to the configuration file on both hosts as explained below. If you forgot the CKAID, you can obtain a list of all host keys on a machine using: The secret part of the keypair is stored inside the " NSS database " which resides in /etc/ipsec.d/*.db . To make a configuration file for this host-to-host tunnel, the lines leftrsasigkey= and rightrsasigkey= from above are added to a custom configuration file placed in the /etc/ipsec.d/ directory. Using an editor running as root , create a file with a suitable name in the following format: /etc/ipsec.d/my_host-to-host.conf Edit the file as follows: Public keys can also be configured by their CKAID instead of by their RSAID. In that case use " leftckaid= " instead of " leftrsasigkey= " You can use the identical configuration file on both left and right hosts. Libreswan automatically detects if it is " left " or " right " based on the specified IP addresses or hostnames. If one of the hosts is a mobile host, which implies the IP address is not known in advance, then on the mobile client use %defaultroute as its IP address. This will pick up the dynamic IP address automatically. On the static server host that accepts connections from incoming mobile hosts, specify the mobile host using %any for its IP address. Ensure the leftrsasigkey value is obtained from the " left " host and the rightrsasigkey value is obtained from the " right " host. The same applies when using leftckaid and rightckaid . Restart ipsec to ensure it reads the new configuration and if configured to start on boot, to confirm that the tunnels establish: When using the auto=start option, the IPsec tunnel should be established within a few seconds. You can manually load and start the tunnel by entering the following commands as root : 4.6.3.1. Verifying Host-To-Host VPN Using Libreswan The IKE negotiation takes place on UDP ports 500 and 4500. IPsec packets show up as Encapsulated Security Payload (ESP) packets. The ESP protocol has no ports. When the VPN connection needs to pass through a NAT router, the ESP packets are encapsulated in UDP packets on port 4500. To verify that packets are being sent through the VPN tunnel, issue a command as root in the following format: Where interface is the interface known to carry the traffic. To end the capture with tcpdump , press Ctrl + C . Note The tcpdump command interacts a little unexpectedly with IPsec . It only sees the outgoing encrypted packet, not the outgoing plaintext packet. It does see the encrypted incoming packet, as well as the decrypted incoming packet. If possible, run tcpdump on a router between the two machines and not on one of the endpoints itself. When using the Virtual Tunnel Interface (VTI), tcpdump on the physical interface shows ESP packets, while tcpdump on the VTI interface shows the cleartext traffic. 
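For example, assuming the physical interface is eth0 and the VTI device is named vti01 (both names are placeholders for your environment), the difference can be observed with two captures; this is an illustrative sketch rather than part of the original procedure:
~]# tcpdump -n -i eth0 esp or udp port 4500
(shows the encrypted ESP or UDP-encapsulated packets)
~]# tcpdump -n -i vti01
(shows the decrypted, cleartext traffic inside the tunnel)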
To check that the tunnel is successfully established, and additionally see how much traffic has gone through the tunnel, enter the following command as root : 4.6.4. Configuring Site-to-Site VPN Using Libreswan In order for Libreswan to create a site-to-site IPsec VPN, joining together two networks, an IPsec tunnel is created between two hosts, endpoints, which are configured to permit traffic from one or more subnets to pass through. They can therefore be thought of as gateways to the remote portion of the network. The configuration of the site-to-site VPN only differs from the host-to-host VPN in that one or more networks or subnets must be specified in the configuration file. To configure Libreswan to create a site-to-site IPsec VPN, first configure a host-to-host IPsec VPN as described in Section 4.6.3, "Creating Host-To-Host VPN Using Libreswan" and then copy or move the file to a file with a suitable name, such as /etc/ipsec.d/my_site-to-site.conf . Using an editor running as root , edit the custom configuration file /etc/ipsec.d/my_site-to-site.conf as follows: To bring the tunnels up, restart Libreswan or manually load and initiate all the connections using the following commands as root : 4.6.4.1. Verifying Site-to-Site VPN Using Libreswan Verifying that packets are being sent through the VPN tunnel is the same procedure as explained in Section 4.6.3.1, "Verifying Host-To-Host VPN Using Libreswan" . 4.6.5. Configuring Site-to-Site Single Tunnel VPN Using Libreswan Often, when a site-to-site tunnel is built, the gateways need to communicate with each other using their internal IP addresses instead of their public IP addresses. This can be accomplished using a single tunnel. If the left host, with host name west , has internal IP address 192.0.1.254 and the right host, with host name east , has internal IP address 192.0.2.254 , store the following single-tunnel configuration in the /etc/ipsec.d/myvpn.conf file on both servers: 4.6.6. Configuring Subnet Extrusion Using Libreswan IPsec is often deployed in a hub-and-spoke architecture. Each leaf node has an IP range that is part of a larger range. Leaves communicate with each other through the hub. This is called subnet extrusion . Example 4.2. Configuring Simple Subnet Extrusion Setup In the following example, we configure the head office with 10.0.0.0/8 and two branches that use a smaller /24 subnet. At the head office: At the " branch1 " office, we use the same connection. Additionally, we use a pass-through connection to exclude our local LAN traffic from being sent through the tunnel: 4.6.7. Configuring IKEv2 Remote Access VPN Libreswan Road warriors are traveling users with mobile clients with a dynamically assigned IP address, such as laptops. These are authenticated using certificates. To avoid needing to use the old IKEv1 XAUTH protocol, IKEv2 is used in the following example: On the server: Where: left= 1.2.3.4 The 1.2.3.4 value specifies the actual IP address or host name of your server. leftcert=vpn-server.example.com This option specifies a certificate referring to its friendly name or nickname that has been used to import the certificate. Usually, the name is generated as a part of a PKCS #12 certificate bundle in the form of a .p12 file. See the pkcs12(1) and pk12util(1) man pages for more information. On the mobile client, the road warrior's device, use a slight variation of the configuration: Where: auto=start This option enables the user to connect to the VPN whenever the ipsec system service is started.
Replace it with the auto=add if you want to establish the connection later. 4.6.8. Configuring IKEv1 Remote Access VPN Libreswan and XAUTH with X.509 Libreswan offers a method to natively assign IP address and DNS information to roaming VPN clients as the connection is established by using the XAUTH IPsec extension. Extended authentication (XAUTH) can be deployed using PSK or X.509 certificates. Deploying using X.509 is more secure. Client certificates can be revoked by a certificate revocation list or by Online Certificate Status Protocol ( OCSP ). With X.509 certificates, individual clients cannot impersonate the server. With a PSK, also called Group Password, this is theoretically possible. XAUTH requires the VPN client to additionally identify itself with a user name and password. For One time Passwords (OTP), such as Google Authenticator or RSA SecureID tokens, the one-time token is appended to the user password. There are three possible back ends for XAUTH: xauthby=pam This uses the configuration in /etc/pam.d/pluto to authenticate the user. Pluggable Authentication Modules (PAM) can be configured to use various back ends by itself. It can use the system account user-password scheme, an LDAP directory, a RADIUS server or a custom password authentication module. See the Using Pluggable Authentication Modules (PAM) chapter for more information. xauthby=file This uses the /etc/ipsec.d/passwd configuration file (it should not be confused with the /etc/ipsec.d/nsspassword file). The format of this file is similar to the Apache .htpasswd file and the Apache htpasswd command can be used to create entries in this file. However, after the user name and password, a third column is required with the connection name of the IPsec connection used, for example when using a conn remoteusers to offer VPN to remote users, a password file entry should look as follows: user1:USDapr1USDMIwQ3DHbUSD1I69LzTnZhnCT2DPQmAOK.:remoteusers Note When using the htpasswd command, the connection name has to be manually added after the user:password part on each line. xauthby=alwaysok The server always pretends the XAUTH user and password combination is correct. The client still has to specify a user name and a password, although the server ignores these. This should only be used when users are already identified by X.509 certificates, or when testing the VPN without needing an XAUTH back end. An example server configuration with X.509 certificates: When xauthfail is set to soft, instead of hard, authentication failures are ignored, and the VPN is set up as if the user authenticated properly. A custom updown script can be used to check for the environment variable XAUTH_FAILED . Such users can then be redirected, for example, using iptables DNAT, to a " walled garden " where they can contact the administrator or renew a paid subscription to the service. VPN clients use the modecfgdomain value and the DNS entries to redirect queries for the specified domain to these specified nameservers. This allows roaming users to access internal-only resources using the internal DNS names. Note that while IKEv2 supports a comma-separated list of domain names and nameserver IP addresses using modecfgdomains and modecfgdns , the IKEv1 protocol only supports one domain name, and libreswan only supports up to two nameserver IP addresses. Optionally, to send a banner text to VPN clients, use the modecfgbanner option. If leftsubnet is not 0.0.0.0/0 , split tunneling configuration requests are sent automatically to the client.
For example, when using leftsubnet=10.0.0.0/8 , the VPN client would only send traffic for 10.0.0.0/8 through the VPN. On the client, the user has to input a user password, which depends on the backend used. For example: xauthby=file The administrator generated the password and stored it in the /etc/ipsec.d/passwd file. xauthby=pam The password is obtained at the location specified in the PAM configuration in the /etc/pam.d/pluto file. xauthby=alwaysok The password is not checked and always accepted. Use this option for testing purposes or if you want to ensure compatibility for xauth-only clients. Additional Resources For more information about XAUTH, see the Extended Authentication within ISAKMP/Oakley (XAUTH) Internet-Draft document. 4.6.9. Using the Protection against Quantum Computers Using IKEv1 with PreShared Keys provided protection against quantum attackers. The redesign of IKEv2 does not offer this protection natively. Libreswan offers the use of Postquantum Preshared Keys ( PPK ) to protect IKEv2 connections against quantum attacks. To enable optional PPK support, add ppk=yes to the connection definition. To require PPK, add ppk=insist . Then, each client can be given a PPK ID with a secret value that is communicated out-of-band (and preferably quantum safe). The PPK's should be very strong in randomness and not be based on dictionary words. The PPK ID and PPK data itself are stored in ipsec.secrets , for example: The PPKS option refers to static PPKs. There is an experimental function to use one-time-pad based Dynamic PPKs. Upon each connection, a new part of a onetime pad is used as the PPK. When used, that part of the dynamic PPK inside the file is overwritten with zeroes to prevent re-use. If there is no more one time pad material left, the connection fails. See the ipsec.secrets(5) man page for more information. Warning The implementation of dynamic PPKs is provided as a Technology Preview and this functionality should be used with caution. See the 7.5 Release Notes for more information. 4.6.10. Additional Resources The following sources of information provide additional resources regarding Libreswan and the ipsec daemon. 4.6.10.1. Installed Documentation ipsec(8) man page - Describes command options for ipsec . ipsec.conf(5) man page - Contains information on configuring ipsec . ipsec.secrets(5) man page - Describes the format of the ipsec.secrets file. ipsec_auto(8) man page - Describes the use of the auto command line client for manipulating Libreswan IPsec connections established using automatic exchanges of keys. ipsec_rsasigkey(8) man page - Describes the tool used to generate RSA signature keys. /usr/share/doc/libreswan- version / 4.6.10.2. Online Documentation https://libreswan.org The website of the upstream project. https://libreswan.org/wiki The Libreswan Project Wiki. https://libreswan.org/man/ All Libreswan man pages. NIST Special Publication 800-77: Guide to IPsec VPNs Practical guidance to organizations on implementing security services based on IPsec. | [
"~]# yum install libreswan",
"~]USD yum info libreswan",
"~]# systemctl stop ipsec ~]# rm /etc/ipsec.d/*db",
"~]# ipsec initnss Initializing NSS database",
"~]# certutil -N -d sql:/etc/ipsec.d Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password:",
"~]# systemctl start ipsec",
"~]USD systemctl status ipsec * ipsec.service - Internet Key Exchange (IKE) Protocol Daemon for IPsec Loaded: loaded (/usr/lib/systemd/system/ipsec.service; disabled; vendor preset: disabled) Active: active (running) since Sun 2018-03-18 18:44:43 EDT; 3s ago Docs: man:ipsec(8) man:pluto(8) man:ipsec.conf(5) Process: 20358 ExecStopPost=/usr/sbin/ipsec --stopnflog (code=exited, status=0/SUCCESS) Process: 20355 ExecStopPost=/sbin/ip xfrm state flush (code=exited, status=0/SUCCESS) Process: 20352 ExecStopPost=/sbin/ip xfrm policy flush (code=exited, status=0/SUCCESS) Process: 20347 ExecStop=/usr/libexec/ipsec/whack --shutdown (code=exited, status=0/SUCCESS) Process: 20634 ExecStartPre=/usr/sbin/ipsec --checknflog (code=exited, status=0/SUCCESS) Process: 20631 ExecStartPre=/usr/sbin/ipsec --checknss (code=exited, status=0/SUCCESS) Process: 20369 ExecStartPre=/usr/libexec/ipsec/_stackmanager start (code=exited, status=0/SUCCESS) Process: 20366 ExecStartPre=/usr/libexec/ipsec/addconn --config /etc/ipsec.conf --checkconfig (code=exited, status=0/SUCCESS) Main PID: 20646 (pluto) Status: \"Startup completed.\" CGroup: /system.slice/ipsec.service └─20646 /usr/libexec/ipsec/pluto --leak-detective --config /etc/ipsec.conf --nofork",
"~]# systemctl enable ipsec",
"~]# ipsec newhostkey --output /etc/ipsec.d/hostkey.secrets Generated RSA key pair with CKAID 14936e48e756eb107fa1438e25a345b46d80433f was stored in the NSS database",
"~]# ipsec showhostkey --left --ckaid 14936e48e756eb107fa1438e25a345b46d80433f # rsakey AQPFKElpV leftrsasigkey=0sAQPFKElpV2GdCF0Ux9Kqhcap53Kaa+uCgduoT2I3x6LkRK8N+GiVGkRH4Xg+WMrzRb94kDDD8m/BO/Md+A30u0NjDk724jWuUU215rnpwvbdAob8pxYc4ReSgjQ/DkqQvsemoeF4kimMU1OBPNU7lBw4hTBFzu+iVUYMELwQSXpremLXHBNIamUbe5R1+ibgxO19l/PAbZwxyGX/ueBMBvSQ+H0UqdGKbq7UgSEQTFa4/gqdYZDDzx55tpZk2Z3es+EWdURwJOgGiiiIFuBagasHFpeu9Teb1VzRyytnyNiJCBVhWVqsB4h6eaQ9RpAMmqBdBeNHfXwb6/hg+JIKJgjidXvGtgWBYNDpG40fEFh9USaFlSdiHO+dmGyZQ74Rg9sWLtiVdlH1YEBUtQb8f8FVry9wSn6AZqPlpGgUdtkTYUCaaifsYH4hoIA0nku4Fy/Ugej89ZdrSN7Lt+igns4FysMmBOl9Wi9+LWnfl+dm4Nc6UNgLE8kZc+8vMJGkLi4SYjk2/MFYgqGX/COxSCPBFUZFiNK7Wda0kWea/FqE1heem7rvKAPIiqMymjSmytZI9hhkCD16pCdgrO3fJXsfAUChYYSPyPQClkavvBL/wNK9zlaOwssTaKTj4Xn90SrZaxTEjpqUeQ==",
"~]# ipsec showhostkey --list < 1 > RSA keyid: AQPFKElpV ckaid: 14936e48e756eb107fa1438e25a345b46d80433f",
"conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig # load and initiate automatically auto=start",
"~]# systemctl restart ipsec",
"~]# ipsec auto --add mytunnel ~]# ipsec auto --up mytunnel",
"~]# tcpdump -n -i interface esp or udp port 500 or udp port 4500 00:32:32.632165 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1a), length 132 00:32:32.632592 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1a), length 132 00:32:32.632592 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 7, length 64 00:32:33.632221 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1b), length 132 00:32:33.632731 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1b), length 132 00:32:33.632731 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 8, length 64 00:32:34.632183 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1c), length 132 00:32:34.632607 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1c), length 132 00:32:34.632607 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 9, length 64 00:32:35.632233 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1d), length 132 00:32:35.632685 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1d), length 132 00:32:35.632685 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 10, length 64",
"~]# ipsec whack --trafficstatus 006 #2: \"mytunnel\", type=ESP, add_time=1234567890, inBytes=336, outBytes=336, id='@east'",
"conn mysubnet also=mytunnel leftsubnet=192.0.1.0/24 rightsubnet=192.0.2.0/24 auto=start conn mysubnet6 also=mytunnel connaddrfamily=ipv6 leftsubnet=2001:db8:0:1::/64 rightsubnet=2001:db8:0:2::/64 auto=start conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig",
"~]# ipsec auto --add mysubnet",
"~]# ipsec auto --add mysubnet6",
"~]# ipsec auto --up mysubnet 104 \"mysubnet\" #1: STATE_MAIN_I1: initiate 003 \"mysubnet\" #1: received Vendor ID payload [Dead Peer Detection] 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 106 \"mysubnet\" #1: STATE_MAIN_I2: sent MI2, expecting MR2 108 \"mysubnet\" #1: STATE_MAIN_I3: sent MI3, expecting MR3 003 \"mysubnet\" #1: received Vendor ID payload [CAN-IKEv2] 004 \"mysubnet\" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_RSA_SIG cipher=aes_128 prf=oakley_sha group=modp2048} 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x9414a615 <0x1a8eb4ef xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"~]# ipsec auto --up mysubnet6 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x06fe2099 <0x75eaa862 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"conn mysubnet [email protected] leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== left=192.1.2.23 leftsourceip=192.0.1.254 leftsubnet=192.0.1.0/24 [email protected] rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== right=192.1.2.45 rightsourceip=192.0.2.254 rightsubnet=192.0.2.0/24 auto=start authby=rsasig",
"conn branch1 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=5.6.7.8 rightid=@branch1 rightsubnet=10.0.1.0/24 rightrsasigkey=0sAXXXX[...] # auto=start authby=rsasig conn branch2 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=10.11.12.13 rightid=@branch2 rightsubnet=10.0.2.0/24 rightrsasigkey=0sAYYYY[...] # auto=start authby=rsasig",
"conn branch1 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=10.11.12.13 rightid=@branch2 rightsubnet=10.0.1.0/24 rightrsasigkey=0sAYYYY[...] # auto=start authby=rsasig conn passthrough left=1.2.3.4 right=0.0.0.0 leftsubnet=10.0.1.0/24 rightsubnet=10.0.1.0/24 authby=never type=passthrough auto=route",
"conn roadwarriors ikev2=insist # Support (roaming) MOBIKE clients (RFC 4555) mobike=yes fragmentation=yes left=1.2.3.4 # if access to the LAN is given, enable this, otherwise use 0.0.0.0/0 # leftsubnet=10.10.0.0/16 leftsubnet=0.0.0.0/0 leftcert=vpn-server.example.com leftid=%fromcert leftxauthserver=yes leftmodecfgserver=yes right=%any # trust our own Certificate Agency rightca=%same # pick an IP address pool to assign to remote users # 100.64.0.0/16 prevents RFC1918 clashes when remote users are behind NAT rightaddresspool=100.64.13.100-100.64.13.254 # if you want remote clients to use some local DNS zones and servers modecfgdns=\"1.2.3.4, 5.6.7.8\" modecfgdomains=\"internal.company.com, corp\" rightxauthclient=yes rightmodecfgclient=yes authby=rsasig # optionally, run the client X.509 ID through pam to allow/deny client # pam-authorize=yes # load connection, don't initiate auto=add # kill vanished roadwarriors dpddelay=1m dpdtimeout=5m dpdaction=%clear",
"conn to-vpn-server ikev2=insist # pick up our dynamic IP left=%defaultroute leftsubnet=0.0.0.0/0 leftcert=myname.example.com leftid=%fromcert leftmodecfgclient=yes # right can also be a DNS hostname right=1.2.3.4 # if access to the remote LAN is required, enable this, otherwise use 0.0.0.0/0 # rightsubnet=10.10.0.0/16 rightsubnet=0.0.0.0/0 # trust our own Certificate Agency rightca=%same authby=rsasig # allow narrowing to the server's suggested assigned IP and remote subnet narrowing=yes # Support (roaming) MOBIKE clients (RFC 4555) mobike=yes # Initiate connection auto=start",
"conn xauth-rsa ikev2=never auto=add authby=rsasig pfs=no rekey=no left=ServerIP leftcert=vpn.example.com #leftid=%fromcert leftid=vpn.example.com leftsendcert=always leftsubnet=0.0.0.0/0 rightaddresspool=10.234.123.2-10.234.123.254 right=%any rightrsasigkey=%cert modecfgdns=\"1.2.3.4,8.8.8.8\" modecfgdomains=example.com modecfgbanner=\"Authorized access is allowed\" leftxauthserver=yes rightxauthclient=yes leftmodecfgserver=yes rightmodecfgclient=yes modecfgpull=yes xauthby=pam dpddelay=30 dpdtimeout=120 dpdaction=clear ike_frag=yes # for walled-garden on xauth failure # xauthfail=soft # leftupdown=/custom/_updown",
"@west @east : PPKS \"user1\" \"thestringismeanttobearandomstr\""
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Securing_Virtual_Private_Networks |
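As a supplement to the firewall guidance in section 4.6.1 above, a minimal firewalld sketch might look as follows; whether the predefined ipsec service definition also opens the ESP and AH protocols depends on the firewalld version, so the protocol rules are shown explicitly as an assumption to verify against your system:
~]# firewall-cmd --permanent --add-service="ipsec"
~]# firewall-cmd --permanent --add-protocol=esp
~]# firewall-cmd --permanent --add-protocol=ah
~]# firewall-cmd --reload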
Chapter 7. opm CLI | Chapter 7. opm CLI 7.1. About opm The opm CLI tool is provided by the Operator Framework for use with the Operator Bundle Format. This tool allows you to create and maintain catalogs of Operators from a list of bundles, called an index , that are similar to software repositories. The result is a container image, called an index image , which can be stored in a container registry and then installed on a cluster. An index contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can use the index image as a catalog by referencing it in a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging formats for more information about the Bundle Format. To create a bundle image using the Operator SDK, see Working with bundle images . 7.2. Installing opm You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages. RHEL 8 meets these requirements: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version Example output Version: version.Version{OpmVersion:"v1.15.4-2-g6183dbb3", GitCommit:"6183dbb3567397e759f25752011834f86f47a3ea", BuildDate:"2021-02-13T04:16:08Z", GoOs:"linux", GoArch:"amd64"} 7.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning index images. | [
"tar xvf <file>",
"echo USDPATH",
"sudo mv ./opm /usr/local/bin/",
"C:\\> path",
"C:\\> move opm.exe <directory>",
"opm version",
"Version: version.Version{OpmVersion:\"v1.15.4-2-g6183dbb3\", GitCommit:\"6183dbb3567397e759f25752011834f86f47a3ea\", BuildDate:\"2021-02-13T04:16:08Z\", GoOs:\"linux\", GoArch:\"amd64\"}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cli_tools/opm-cli |
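As a brief illustration of the index workflow described above, the following sketch adds a bundle image to an index image and pushes the result; the image names are placeholders, and the exact flags should be verified against the opm version in use:
opm index add --bundles quay.io/example/example-operator-bundle:v0.1.0 --tag quay.io/example/example-index:1.0.0 --container-tool podman
podman push quay.io/example/example-index:1.0.0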
Chapter 36. Compiler and Tools | Chapter 36. Compiler and Tools java-1.8.0-openjdk component, BZ# 1189530 With Red Hat Enterprise Linux 7.1, the java-1.8.0-openjdk packages do not provide "java" in the RPM metadata, which breaks compatibility with packages that require Java and are available from the Enterprise Application Platform (EAP) channel. To work around this problem, install another package that provides "java" in the RPM metadata before installing java-1.8.0-openjdk . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/known-issues-compiler |
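A minimal sketch of the workaround, assuming that another OpenJDK package available in your channels provides "java" in its RPM metadata (java-1.7.0-openjdk is used here purely as a hypothetical example; verify the Provides entry before relying on it):
~]# repoquery --provides java-1.7.0-openjdk | grep '^java'
~]# yum install java-1.7.0-openjdk
~]# yum install java-1.8.0-openjdk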
Chapter 25. Apache HTTP Secure Server Configuration | Chapter 25. Apache HTTP Secure Server Configuration 25.1. Introduction This chapter provides basic information on the Apache HTTP Server with the mod_ssl security module enabled to use the OpenSSL library and toolkit. The combination of these three components is referred to in this chapter as the secure Web server or just as the secure server. The mod_ssl module is a security module for the Apache HTTP Server. The mod_ssl module uses the tools provided by the OpenSSL Project to add a very important feature to the Apache HTTP Server - the ability to encrypt communications. In contrast, regular HTTP communications between a browser and a Web server are sent in plain text, which could be intercepted and read by someone along the route between the browser and the server. This chapter is not meant to be complete and exclusive documentation for any of these programs. When possible, this guide points to appropriate places where you can find more in-depth documentation on particular subjects. This chapter shows you how to install these programs. You can also learn the steps necessary to generate a private key and a certificate request, how to generate your own self-signed certificate, and how to install a certificate to use with your secure server. The mod_ssl configuration file is located at /etc/httpd/conf.d/ssl.conf . For this file to be loaded, and hence for mod_ssl to work, you must have the statement Include conf.d/*.conf in the /etc/httpd/conf/httpd.conf file. This statement is included by default in the default Apache HTTP Server configuration file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/apache_http_secure_server_configuration
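A quick sketch for confirming the two prerequisites named in this chapter, using the paths given above:
~]# grep "Include conf.d" /etc/httpd/conf/httpd.conf
Include conf.d/*.conf
~]# ls /etc/httpd/conf.d/ssl.conf
/etc/httpd/conf.d/ssl.conf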
AMQ Clients 2.10 Release Notes | AMQ Clients 2.10 Release Notes Red Hat AMQ 2021.Q3 Release Notes for Red Hat AMQ Clients | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_clients_2.10_release_notes/index |
Chapter 26. Automating group membership using IdM CLI | Chapter 26. Automating group membership using IdM CLI Using automatic group membership allows you to assign users and hosts to groups automatically based on their attributes. For example, you can: Divide employees' user entries into groups based on the employees' manager, location, or any other attribute. Divide hosts based on their class, location, or any other attribute. Add all users or all hosts to a single global group. This chapter covers the following topics: Benefits of automatic group membership Automember rules Adding an automember rule using IdM CLI Adding a condition to an automember rule using IdM CLI Viewing existing automember rules using IdM CLI Deleting an automember rule using IdM CLI Removing a condition from an automember rule using IdM CLI Applying automember rules to existing entries using IdM CLI Configuring a default automember group using IdM CLI 26.1. Benefits of automatic group membership Using automatic membership for users allows you to: Reduce the overhead of manually managing group memberships You no longer have to assign every user and host to groups manually. Improve consistency in user and host management Users and hosts are assigned to groups based on strictly defined and automatically evaluated criteria. Simplify the management of group-based settings Various settings are defined for groups and then applied to individual group members, for example sudo rules, automount, or access control. Adding users and hosts to groups automatically makes managing these settings easier. 26.2. Automember rules When configuring automatic group membership, the administrator defines automember rules. An automember rule applies to a specific user or host target group. It cannot apply to more than one group at a time. After creating a rule, the administrator adds conditions to it. These specify which users or hosts get included or excluded from the target group: Inclusive conditions When a user or host entry meets an inclusive condition, it will be included in the target group. Exclusive conditions When a user or host entry meets an exclusive condition, it will not be included in the target group. The conditions are specified as regular expressions in the Perl-compatible regular expressions (PCRE) format. For more information about PCRE, see the pcresyntax(3) man page on your system. Note IdM evaluates exclusive conditions before inclusive conditions. In case of a conflict, exclusive conditions take precedence over inclusive conditions. An automember rule applies to every entry created in the future. These entries will be automatically added to the specified target group. If an entry meets the conditions specified in multiple automember rules, it will be added to all the corresponding groups. Existing entries are not affected by the new rule. If you want to change existing entries, see Applying automember rules to existing entries using IdM CLI . 26.3. Adding an automember rule using IdM CLI Follow this procedure to add an automember rule using the IdM CLI. For information about automember rules, see Automember rules . After adding an automember rule, you can add conditions to it using the procedure described in Adding a condition to an automember rule . Note Existing entries are not affected by the new rule. If you want to change existing entries, see Applying automember rules to existing entries using IdM CLI . Prerequisites You must be logged in as the administrator. 
For details, see Using kinit to log in to IdM manually . The target group of the new rule must exist in IdM. Procedure Enter the ipa automember-add command to add an automember rule. When prompted, specify: Automember rule . This is the target group name. Grouping Type . This specifies whether the rule targets a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . For example, to add an automember rule for a user group named user_group : Verification You can display existing automember rules and conditions in IdM using Viewing existing automember rules using IdM CLI . 26.4. Adding a condition to an automember rule using IdM CLI After configuring automember rules, you can then add a condition to that automember rule using the IdM CLI. For information about automember rules, see Automember rules . Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . The target rule must exist in IdM. For details, see Adding an automember rule using IdM CLI . Procedure Define one or more inclusive or exclusive conditions using the ipa automember-add-condition command. When prompted, specify: Automember rule . This is the target rule name. See Automember rules for details. Attribute Key . This specifies the entry attribute to which the filter will apply. For example, uid for users. Grouping Type . This specifies whether the rule targets a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . Inclusive regex and Exclusive regex . These specify one or more conditions as regular expressions. If you only want to specify one condition, press Enter when prompted for the other. For example, the following condition targets all users with any value (.*) in their user login attribute ( uid ). As another example, you can use an automembership rule to target all Windows users synchronized from Active Directory (AD). To achieve this, create a condition that that targets all users with ntUser in their objectClass attribute, which is shared by all AD users: Verification You can display existing automember rules and conditions in IdM using Viewing existing automember rules using IdM CLI . 26.5. Viewing existing automember rules using IdM CLI Follow this procedure to view existing automember rules using the IdM CLI. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Enter the ipa automember-find command. When prompted, specify the Grouping type : To target a user group, enter group . To target a host group, enter hostgroup . For example: 26.6. Deleting an automember rule using IdM CLI Follow this procedure to delete an automember rule using the IdM CLI. Deleting an automember rule also deletes all conditions associated with the rule. To remove only specific conditions from a rule, see Removing a condition from an automember rule using IdM CLI . Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Enter the ipa automember-del command. When prompted, specify: Automember rule . This is the rule you want to delete. Grouping rule . This specifies whether the rule you want to delete is for a user group or a host group. Enter group or hostgroup . 26.7. Removing a condition from an automember rule using IdM CLI Follow this procedure to remove a specific condition from an automember rule. 
Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Enter the ipa automember-remove-condition command. When prompted, specify: Automember rule . This is the name of the rule from which you want to remove a condition. Attribute Key . This is the target entry attribute. For example, uid for users. Grouping Type . This specifies whether the condition you want to delete is for a user group or a host group. Enter group or hostgroup . Inclusive regex and Exclusive regex . These specify the conditions you want to remove. If you only want to specify one condition, press Enter when prompted for the other. For example: 26.8. Applying automember rules to existing entries using IdM CLI Automember rules apply automatically to user and host entries created after the rules were added. They are not applied retroactively to entries that existed before the rules were added. To apply automember rules to previously added entries, you have to manually rebuild automatic membership. Rebuilding automatic membership re-evaluates all existing automember rules and applies them either to all user or hosts entries, or to specific entries. Note Rebuilding automatic membership does not remove user or host entries from groups, even if the entries no longer match the group's inclusive conditions. To remove them manually, see Removing a member from a user group using IdM CLI or Removing IdM host group members using the CLI . Prerequisites You must be logged in as the administrator. For details, see link: Using kinit to log in to IdM manually . Procedure To rebuild automatic membership, enter the ipa automember-rebuild command. Use the following options to specify the entries to target: To rebuild automatic membership for all users, use the --type=group option: To rebuild automatic membership for all hosts, use the --type=hostgroup option. To rebuild automatic membership for a specified user or users, use the --users= target_user option: To rebuild automatic membership for a specified host or hosts, use the --hosts= client.idm.example.com option. 26.9. Configuring a default automember group using IdM CLI When you configure a default automember group, new user or host entries that do not match any automember rule are automatically added to this default group. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . The target group you want to set as default exists in IdM. Procedure Enter the ipa automember-default-group-set command to configure a default automember group. When prompted, specify: Default (fallback) Group , which specifies the target group name. Grouping Type , which specifies whether the target is a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . For example: Note To remove the current default automember group, enter the ipa automember-default-group-remove command. Verification To verify that the group is set correctly, enter the ipa automember-default-group-show command. The command displays the current default automember group. For example: | [
"ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group",
"ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: ------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-find Grouping Type: group --------------- 1 rules matched --------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of entries returned 1 ----------------------------",
"ipa automember-remove-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ----------------------------------- Removed condition(s) from \"user_group\" ----------------------------------- Automember Rule: user_group ------------------------------ Number of conditions removed 1 ------------------------------",
"ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------",
"ipa automember-rebuild --users=target_user1 --users=target_user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------",
"ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/automating-group-membership-using-idm-cli_managing-users-groups-hosts |
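The interactive prompts shown above can also be supplied directly as command-line options; for example, a sketch of deleting the user_group rule for a user group non-interactively (the --type option is assumed to be accepted here, as it is by the related ipa automember-rebuild command shown above):
ipa automember-del user_group --type=group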
10.5. Statistical Information | 10.5. Statistical Information FS-Cache also keeps track of general statistical information. To view this information, use: FS-Cache statistics includes information on decision points and object counters. For more information, see the following kernel document: /usr/share/doc/kernel-doc- version /Documentation/filesystems/caching/fscache.txt | [
"cat /proc/fs/fscache/stats"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/fscachestats |
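To observe how the decision-point and object counters change while the cache is in use, the same file can simply be polled, for example:
watch -n 1 cat /proc/fs/fscache/stats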
15.3. Managing Users and Groups for a CA, OCSP, KRA, or TKS | 15.3. Managing Users and Groups for a CA, OCSP, KRA, or TKS Many of the operations that users can perform are dictated by the groups that they belong to; for instance, agents for the CA manage certificates and profiles, while administrators manage CA server configuration. Four subsystems - the CA, OCSP, KRA, and TKS - use the Java administrative console to manage groups and users. The TPS has web-based admin services, and users and groups are configured through its web service page. 15.3.1. Managing Groups Note pkiconsole is being deprecated. 15.3.1.1. Creating a New Group Log into the administrative console. Select Users and Groups from the navigation menu on the left. Select the Groups tab. Click Edit , and fill in the group information. It is only possible to add users who already exist in the internal database. Edit the ACLs to grant the group privileges. See Section 15.5.4, "Editing ACLs" for more information. If no ACIs are added to the ACLs for the group, the group will have no access permissions to any part of Certificate System. 15.3.1.2. Changing Members in a Group Members can be added or deleted from all groups. The group for administrators must have at least one user entry. Log into the administrative console. Select Users and Groups from the navigation tree on the left. Click the Groups tab. Select the group from the list of names, and click Edit . Make the appropriate changes. To change the group description, type a new description in the Group description field. To remove a user from the group, select the user, and click Delete . To add users, click Add User . Select the users to add from the dialog box, and click OK . 15.3.2. Managing Users (Administrators, Agents, and Auditors) The users for each subsystem are maintained separately. Just because a person is an administrator in one subsystem does not mean that person has any rights (or even a user entry) for another subsystem. Users can be configured and, with their user certificates, trusted as agents, administrators, or auditors for a subsystem. 15.3.2.1. Creating Users After you installed Certificate System, only the user created during the setup exists. This section describes how to create additional users. Note For security reasons, create individual accounts for Certificate System users. 15.3.2.1.1. Creating Users Using the Command Line To create a user using the command line: Add a user account. For example, to add the example user to the CA: This command uses the caadmin user to add a new account. Optionally, add a user to a group. For example, to add the example user to the Certificate Manager Agents group: Create a certificate request: If a Key Recovery Authority (KRA) exists in your Certificate System environment: This command stores the Certificate Signing Request (CSR) in the CRMF format in the ~/user_name.req file. If no Key Recovery Authority (KRA) exists in your Certificate System environment: Create a NSS database directory: Store the CSR in a PKCS-#10 formatted file specified by the -o option, -d for the path to an initialized NSS database directory, -P option for a password file, -p for a password, and -n for a subject DN: Create an enrollment request: Create the ~/cmc.role_crmf.cfg file with the following content: Set the parameters based on your environment and the CSR format used in the step. 
Pass the previously created configuration file to the CMCRequest utility to create the CMC request: Submit a Certificate Management over CMS (CMC) request: Create the ~/HttpClient_role_crmf.cfg file with the following content: Set the parameters based on your environment. Submit the request to the CA: Verify the result: Optionally, to import the certificate as the user to its own ~/.dogtag/pki-instance_name/ database: Add the certificate to the user record: List certificates issued for the user to discover the certificate's serial number. For example, to list certificates that contain the example user name in the certificate's subject: The serial number of the certificate is required in the step. Add the certificate using its serial number from the certificate repository to the user account in the Certificate System database. For example, for a CA user: 15.3.2.1.2. Creating Users Using the Console Note pkiconsole is being deprecated. To create a user using the PKI Console: Log into the administrative console. In the Configuration tab, select Users and Groups . Click Add . Fill in the information in the Edit User Information dialog. Most of the information is standard user information, such as the user's name, email address, and password. This window also contains a field called User State , which can contain any string, which is used to add additional information about the user; most basically, this field can show whether this is an active user. Select the group to which the user will belong. The user's group membership determines what privileges the user has. Assign agents, administrators, and auditors to the appropriate subsystem group. Store the user's certificate. Request a user certificate through the CA end-entities service page. If auto-enrollment is not configured for the user profile, then approve the certificate request. Retrieve the certificate using the URL provided in the notification email, and copy the base-64 encoded certificate to a local file or to the clipboard. Select the new user entry, and click Certificates . Click Import , and paste in the base-64 encoded certificate. 15.3.2.2. Changing a Certificate System User's Certificate Log into the administrative console. Select Users and Groups . Select the user to edit from the list of user IDs, and click Certificates . Click Import to add the new certificate. In the Import Certificate window, paste the new certificate in the text area. Include the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines. 15.3.2.3. Renewing Administrator, Agent, and Auditor User Certificates There are two methods of renewing a certificate. Regenerating the certificate takes its original key and its original profile and request, and recreates an identical key with a new validity period and expiration date. Re-keying a certificate resubmits the initial certificate request to the original profile, but generates a new key pair. Administrator certificates can be renewed by being re-keyed. Each subsystem has a bootstrap user that was created at the time the subsystem was created. A new certificate can be requested for this user before their original one expires, using one of the default renewal profiles. Certificates for administrative users can be renewed directly in the end user enrollment forms, using the serial number of the original certificate. Renew the admin user certificates in the CA's end users forms, as described in Section 5.4.1.1.2, "Certificate-Based Renewal" . 
This must be the same CA that first issued the certificate (or a clone of it). Agent certificates can be renewed by using the certificate-based renewal form in the end entities page. Self-renew user SSL client certificate . This form recognizes and updates the certificate stored in the browser's certificate store directly. Note It is also possible to renew the certificate using certutil , as described in Section 17.3.3, "Renewing Certificates Using certutil" . Rather than using the certificate stored in a browser to initiate renewal, certutil uses an input file with the original key. Add the renewed user certificate to the user entry in the internal LDAP database. Open the console for the subsystem. Configuration | Users and Groups | Users | admin | Certificates | Import In the Configuration tab, select Users and Groups . In the Users tab, double-click the user entry with the renewed certificate, and click Certificates . Click Import , and paste in the base-64 encoded certificate. Note pkiconsole is being deprecated. This can also be done by using ldapmodify to add the renewed certificate directly to the user entry in the internal LDAP database, by replacing the userCertificate attribute in the user entry, such as uid=admin,ou=people,dc= subsystem-base-DN (a minimal LDIF sketch is provided after the command listing for this section). 15.3.2.4. Renewing an Expired Administrator, Agent, and Auditor User Certificate When a valid user certificate has already expired, you can no longer use the web service page or the pki command-line tool requiring authentication. In such a scenario, you can use the pki-server cert-fix command to renew an expired certificate. Before you proceed, make sure: You have a valid CA certificate. You have root privileges. Procedure 15.1. Renewing an Expired Administrator, Agent, and Auditor User Certificate Disable self test. Either run the following command: Or remove the following line from CA's CS.cfg file and restart the CA subsystem: Check the expired certificates in the client's NSS database and find the certificate's serial number (certificate ID). List the user certificates: Get the expired certificate serial number, which you want to renew: Renew the certificate. The local LDAP server requires the LDAP Directory Manager's password. Re-enable self test. Either run the following command: Or add the following line to CA's CS.cfg file and restart the CA subsystem: To verify that you have succeeded in the certificate renewal, you can display sufficient information about the certificate by running: To see full details of the specific certificate including attributes, extensions, public key modulus, hashes, and more, you can also run: 15.3.2.5. Deleting a Certificate System User Users can be deleted from the internal database. Deleting a user from the internal database deletes that user from all groups to which the user belongs. To remove the user from specific groups, modify the group membership. Delete a privileged user from the internal database by doing the following: Log into the administrative console. Select Users and Groups from the navigation menu on the left. Select the user from the list of user IDs, and click Delete . Confirm the delete when prompted. | [
"pkiconsole https://server.example.com:8443/ subsystem_type",
"pki -d ~/.dogtag/pki-instance_name/ca/alias/ -c password -n caadmin ca -user-add example --fullName \" Example User \" --------------------- Added user \"example\" --------------------- User ID: example Full name: Example User",
"pki -d ~/.dogtag/pki-instance_name/ -p password -n \" caadmin \" user-add-membership example Certificate Manager Agents",
"CRMFPopClient -d ~/.dogtag/pki-instance_name/ -p password -n \" user_name \" -q POP_SUCCESS -b kra.transport -w \"AES/CBC/PKCS5Padding\" -v -o ~/user_name.req",
"export pkiinstance=ca1 # echo USD{pkiinstance} # export agentdir=~/.dogtag/USD{pkiinstance}/agent1.dir # echo USD{agentdir} # pki -d USD{agentdir}/ -C USD{ somepwdfile } client-init",
"PKCS10Client -d USD{agentdir}/ -P USD{ somepwdfile } -n \"cn=agent1,uid=agent1\" -o USD{agentdir}/agent1.csr PKCS10Client: Certificate request written into /.dogtag/ca1/agent1.dir/agent1.csr PKCS10Client: PKCS#10 request key id written into /.dogtag/ca1/agent1.dir/agent1.csr.keyId",
"#numRequests: Total number of PKCS10 requests or CRMF requests. numRequests=1 #input: full path for the PKCS10 request or CRMF request, #the content must be in Base-64 encoded format #Multiple files are supported. They must be separated by space. input= ~/user_name.req #output: full path for the CMC request in binary format output= ~/cmc.role_crmf.req #tokenname: name of token where agent signing cert can be found (default is internal) tokenname=internal #nickname: nickname for agent certificate which will be used #to sign the CMC full request. nickname= PKI Administrator for Example.com #dbdir: directory for cert9.db, key4.db and pkcs11.txt dbdir= ~/.dogtag/pki-instance_name/ #password: password for cert9.db which stores the agent #certificate password= password #format: request format, either pkcs10 or crmf format= crmf",
"CMCRequest ~/cmc.role_crmf.cfg",
"#host: host name for the http server host= server.example.com #port: port number port= 8443 #secure: true for secure connection, false for nonsecure connection secure=true #input: full path for the enrollment request, the content must be in binary format input= ~/cmc.role_crmf.req #output: full path for the response in binary format output= ~/cmc.role_crmf.resp #tokenname: name of token where SSL client authentication cert can be found (default is internal) #This parameter will be ignored if secure=false tokenname=internal #dbdir: directory for cert9.db, key4.db and pkcs11.txt #This parameter will be ignored if secure=false dbdir= ~/.dogtag/pki-instance_name/ #clientmode: true for client authentication, false for no client authentication #This parameter will be ignored if secure=false clientmode=true #password: password for cert9.db #This parameter will be ignored if secure=false and clientauth=false password= password #nickname: nickname for client certificate #This parameter will be ignored if clientmode=false nickname= PKI Administrator for Example.com #servlet: servlet name servlet=/ca/ee/ca/profileSubmitCMCFull",
"HttpClient ~/HttpClient_role_crmf.cfg Total number of bytes read = 3776 after SSLSocket created, thread token is Internal Key Storage Token client cert is not null handshake happened writing to socket Total number of bytes read = 2523 MIIJ1wYJKoZIhvcNAQcCoIIJyDCCCcQCAQMxDzANBglghkgBZQMEAgEFADAxBggr The response in data format is stored in ~/cmc.role_crmf.resp",
"CMCResponse ~/cmc.role_crmf.resp Certificates: Certificate: Data: Version: v3 Serial Number: 0xE Signature Algorithm: SHA256withRSA - 1.2.840.113549.1.1.11 Issuer: CN=CA Signing Certificate,OU=pki- instance_name Security Domain Validity: Not Before: Friday, July 21, 2017 12:06:50 PM PDT America/Los_Angeles Not After: Wednesday, January 17, 2018 12:06:50 PM PST America/Los_Angeles Subject: CN= user_name Number of controls is 1 Control #0: CMCStatusInfoV2 OID: {1 3 6 1 5 5 7 7 25} BodyList: 1 Status: SUCCESS",
"certutil -d ~/.dogtag/pki-instance_name/ -A -t \"u,u,u\" -n \" user_name certificate \" -i ~/cmc.role_crmf.resp",
"pki -d ~/.dogtag/pki-instance_name/ -c password -n caadmin ca-user-cert-find example ----------------- 1 entries matched ----------------- Cert ID: 2;6;CN=CA Signing Certificate,O=EXAMPLE;CN=PKI Administrator,E= example @example.com,O=EXAMPLE Version: 2 Serial Number: 0x6 Issuer: CN=CA Signing Certificate,O=EXAMPLE Subject: CN=PKI Administrator,E= example @example.com,O=EXAMPLE ---------------------------- Number of entries returned 1",
"pki -c password -n caadmin ca -user-cert-add example --serial 0x6",
"pkiconsole https://server.example.com:8443/ subsystem_type",
"pkiconsole https://server.example.com: admin_port/subsystem_type",
"pki-server selftest-disable -i PKI_instance",
"selftests.container.order.startup=CAPresence:critical, SystemCertsVerification:critical",
"certutil -L -d /root/nssdb/",
"certutil -L -d /root/nssdb/ -n Expired_cert | grep Serial Serial Number: 16 (0x10)",
"pki-server cert-fix --ldap-url ldap:// host 389 --agent-uid caadmin -i PKI_instance -p PKI_https_port --extra-cert 16",
"pki-server selftest-enable -i PKI_instance",
"selftests.container.order.startup=CAPresence:critical, SystemCertsVerification:critical",
"pki ca-cert-find",
"pki ca-cert-show 16 --pretty"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Creating_a_New_Group |
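The ldapmodify route mentioned above can be scripted. The following is a minimal sketch only: the NSS database path, the "caadmin" nickname, the LDAP host, port, bind DN, and subsystem base DN are assumptions carried over from the examples in this section and must be adapted to the actual deployment.

# Sketch: export the renewed certificate in DER form from the client NSS database
# (the nickname and instance path below are assumptions, not fixed values).
certutil -L -d ~/.dogtag/pki-instance_name/ -n "caadmin" -r > /root/admin-renewed.der

# Replace the userCertificate attribute of the admin entry in the internal LDAP database.
# Host, port, and base DN are placeholders; -W prompts for the Directory Manager password.
ldapmodify -H ldap://server.example.com:389 -D "cn=Directory Manager" -W <<EOF
dn: uid=admin,ou=people,dc=subsystem-base-DN
changetype: modify
replace: userCertificate
userCertificate:< file:///root/admin-renewed.der
EOF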
Chapter 6. Viewing AMQ Brokers | Chapter 6. Viewing AMQ Brokers You can configure the Fuse Console to view all AMQ brokers that are deployed on the OpenShift cluster. Prerequisites Each AMQ broker image (that you want to view in the Fuse Console) must be: Installed on the same OpenShift cluster that the Fuse Console is installed on. Configured so that the Fuse Console can recognize and connect to it, as described in the section on enabling the Artemis plugin in the Fuse Console in the AMQ Broker documentation. Procedure Click Artemis to view the AMQ management console and monitor the status of AMQ Broker. (The AMQ Broker is based on Apache ActiveMQ Artemis .) For information on using the AMQ management console, see Using AMQ Management Console in the Managing AMQ Broker guide. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/fuse-console-view-amq-brokers_fcopenshift |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/getting_started_with_streams_for_apache_kafka_on_openshift/proc-providing-feedback-on-redhat-documentation |
Appendix B. List of tickets by component | Appendix B. List of tickets by component Component Tickets 389-ds-base BZ#1740693 , BZ#1662461 , BZ#1732053 , BZ#1749236 , BZ#1639342 , BZ#1723545 , BZ#1724914, BZ#1739182 , BZ#1749595 , BZ#1756182 , BZ#1685059 anaconda BZ#1680606, BZ#1712987, BZ#1767612 ansible BZ#1410996 , BZ#1439896, BZ#1660838 bind BZ#1758317 corosync BZ#1413573 criu BZ#1400230 custodia BZ#1403214 desktop BZ#1509444, BZ#1481411 dnf BZ#1461652 fence-agents BZ#1476401 filesystems BZ#1666535, BZ#1274459, BZ#1111712, BZ#1206277, BZ#1477977 firewalld BZ#1738785 , BZ#1723610, BZ#1637675 , BZ#1713823 freeradius BZ#1463673 fwupd BZ#1623466 gnome-shell BZ#1720286 , BZ#1539772, BZ#1481395 hardware-enablement BZ#1660791, BZ#1062759, BZ#1384452, BZ#1519746, BZ#1454918, BZ#1454916 identity-management BZ#1405325 ipa BZ#1711172 , BZ#1544470 , BZ#1754494 , BZ#1733209 , BZ#1691939, BZ#1583950, BZ#1755223 , BZ#1115294 , BZ#1298286 , BZ#1518939 java-1.8.0-openjdk BZ#1498932, BZ#1746874 kernel-rt BZ#1708718 , BZ#1550584 kernel BZ#1801759 , BZ#1713642, BZ#1373519, BZ#1808458 , BZ#1724027, BZ#1708465, BZ#1722855, BZ#1772107, BZ#1737111, BZ#1770232, BZ#1787295, BZ#1807077, BZ#1559615, BZ#1230959, BZ#1460849, BZ#1464377, BZ#1457533, BZ#1503123, BZ#1589397, BZ#1726642 kexec-tools BZ#1723492 libcacard BZ#917867 libguestfs BZ#1387213 libreswan BZ#1375750 libteam BZ#1704451 libvirt BZ#1475770 lorax-composer BZ#1718473 lvm2 BZ#1642162 mariadb BZ#1731062 networking BZ#1712737, BZ#1712918, BZ#1722686, BZ#1698551, BZ#1729033, BZ#1700691, BZ#1711520 , BZ#1062656, BZ#916384, BZ#916382, BZ#755087, BZ#1259547, BZ#1393375 nss BZ#1431210 , BZ#1425514 , BZ#1432142 openscap BZ#1767826 ovmf BZ#653382 pacemaker BZ#1710422 , BZ#1781820 pcs BZ#1433016 perl-Socket BZ#1693293 pki-core BZ#1523330 python-blivet BZ#1632274 python-kdcproxy BZ#1746107 rear BZ#1693608 resource-agents BZ#1513957 rsyslog BZ#1309698 samba BZ#1724991 scap-security-guide BZ#1691336 , BZ#1755192 , BZ#1791583 , BZ#1726698 , BZ#1777862 selinux-policy BZ#1687497, BZ#1727379 , BZ#1651253 , BZ#1752577 services BZ#1749776 sos BZ#1704957 sssd BZ#1068725 storage BZ#1649493, BZ#1642968, BZ#1710533, BZ#1703180, BZ#1109348, BZ#1119909, BZ#1414957 systemd BZ#1284974 tools BZ#1569484 usbguard BZ#1480100 virtualization BZ#1607311, BZ#1746771, BZ#1751054 , BZ#1773478 , BZ#1103193, BZ#1348508, BZ#1299662 , BZ#1661654 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.8_release_notes/list_of_tickets_by_component |
Preface | Preface This guide will help you to install and configure 3scale | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/installing_red_hat_3scale_api_management/pr01 |
Chapter 5. Build and run microservices applications on the OpenShift image for JBoss EAP XP | Chapter 5. Build and run microservices applications on the OpenShift image for JBoss EAP XP You can build and run your microservices applications on the OpenShift image for JBoss EAP XP. Note JBoss EAP XP is supported only on OpenShift 4 and later versions. Use the following workflow to build and run a microservices application on the OpenShift image for JBoss EAP XP by using the source-to-image (S2I) process. Note The OpenShift images for JBoss EAP XP 4.0.0 provide a default standalone configuration file, which is based on the standalone-microprofile-ha.xml file. For more information about the server configuration files included in JBoss EAP XP, see the Standalone server configuration files section. This workflow uses the microprofile-config quickstart as an example. The quickstart provides a small, specific working example that can be used as a reference for your own project. See the microprofile-config quickstart that ships with JBoss EAP XP 4.0.0 for more information. Additional resources For more information about the server configuration files included in JBoss EAP XP, see Standalone server configuration files . 5.1. Preparing OpenShift for application deployment Prepare OpenShift for application deployment. Prerequisites You have installed an operational OpenShift instance. For more information, see the Installing and Configuring OpenShift Container Platform Clusters book on Red Hat Customer Portal . Procedure Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. A project allows a group of users to organize and manage content separately from other groups. You can create a project in OpenShift using the following command. For example, for the microprofile-config quickstart, create a new project named eap-demo using the following command. 5.2. Configuring authentication to the Red Hat Container Registry Before you can import and use the OpenShift image for JBoss EAP XP, you must configure authentication to the Red Hat Container Registry. Create an authentication token using a registry service account to configure access to the Red Hat Container Registry. You need not use or store your Red Hat account's username and password in your OpenShift configuration when you use an authentication token. Procedure Follow the instructions on Red Hat Customer Portal to create an authentication token using a Registry Service Account management application . Download the YAML file containing the OpenShift secret for the token. You can download the YAML file from the OpenShift Secret tab on your token's Token Information page. Create the authentication token secret for your OpenShift project using the YAML file that you downloaded: Configure the secret for your OpenShift project using the following commands, replacing the secret name below with the name of your secret created in the previous step. Additional resources Configuring authentication to the Red Hat Container Registry Registry Service Account management application Configuring access to secured registries 5.3. Importing the latest OpenShift imagestreams and templates for JBoss EAP XP Import the latest OpenShift imagestreams and templates for JBoss EAP XP. Important OpenJDK 8 images and imagestreams on OpenShift are deprecated. The images and imagestreams are still supported on OpenShift. However, no enhancements are made to these images and imagestreams and they might be removed in the future.
Red Hat continues to provide full support and bug fixes for OpenJDK 8 images and imagestreams under its standard support terms and conditions. Procedure To import the latest imagestreams and templates for the OpenShift image for JBoss EAP XP into your OpenShift project's namespace, use the following commands: Import JDK 11 imagestream: This command imports the following imagestreams and templates: The JDK 11 builder imagestream: jboss-eap-xp4-openjdk11-openshift The JDK 11 runtime imagestream: jboss-eap-xp4-openjdk11-runtime-openshift Import the OpenShift templates: Note The JBoss EAP XP imagestreams and templates imported using the above command are only available within that OpenShift project. If you have administrative access to the general openshift namespace and want the imagestreams and templates to be accessible by all projects, add -n openshift to the oc replace line of the command. For example: If you want to import the imagestreams and templates into a different project, add -n PROJECT_NAME to the oc replace line of the command. For example: If you use the cluster-samples-operator, see the OpenShift documentation on configuring the cluster samples operator. See Configuring the Cluster Samples Operator for details about configuring the cluster samples operator. A short sketch for verifying the imported imagestreams and templates follows the command listing below. 5.4. Deploying a JBoss EAP XP source-to-image (S2I) application on OpenShift Deploy a JBoss EAP XP source-to-image (S2I) application on OpenShift. Prerequisites Optional: A template can specify default values for many template parameters, and you might have to override some, or all, of the defaults. To see template information, including a list of parameters and any default values, use the command oc describe template TEMPLATE_NAME . Procedure Create a new OpenShift application using the JBoss EAP XP image and your Java application's source code. Use one of the provided JBoss EAP XP templates for S2I builds. 1 The template to use. The application image is tagged with the latest tag. 2 The latest imagestreams and templates were imported into the project's namespace , so you must specify the namespace where the imagestream can be found. This is usually the project's name. 3 URL to the repository containing the application source code. 4 The Git repository reference to use for the source code. This can be a Git branch or tag reference. 5 The directory within the source repository to build. Note A template can specify default values for many template parameters, and you might have to override some, or all, of the defaults. To see template information, including a list of parameters and any default values, use the command oc describe template TEMPLATE_NAME . You might also want to configure environment variables when creating your new OpenShift application. Retrieve the name of the build configurations. Use the name of the build configurations from the previous step to view the Maven progress of the builds. For example, for the microprofile-config , the following command shows the progress of the Maven builds. Additional resources Importing the latest OpenShift imagestreams and templates for JBoss EAP XP . Preparing OpenShift for application deployment . 5.5. Completing post-deployment tasks for JBoss EAP XP source-to-image (S2I) application Depending on your application, you might need to complete some tasks after your OpenShift application has been built and deployed. Examples of post-deployment tasks include the following: Exposing a service so that the application is viewable from outside of OpenShift.
Scaling your application to a specific number of replicas. Procedure Get the service name of your application using the following command. Optional : Expose the main service as a route so you can access your application from outside of OpenShift. For example, for the microprofile-config quickstart, use the following command to expose the required service and port. Note If you used a template to create the application, the route might already exist. If it does, continue on to the next step. Get the URL of the route. Access the application in your web browser using the URL. The URL is the value of the HOST/PORT field from the command's output. Note For the JBoss EAP XP 4.0.0 GA distribution, the Microprofile Config quickstart does not reply to HTTPS GET requests to the application's root context. This enhancement is only available in the {JBossXPShortName101} GA distribution. For example, to interact with the Microprofile Config application, the URL might be http:// HOST_PORT_VALUE /config/value in your browser. If your application does not use the JBoss EAP root context, append the context of the application to the URL. For example, for the microprofile-config quickstart, the URL might be http:// HOST_PORT_VALUE /microprofile-config/ . Optionally, you can scale up the application instance by running the following command. This command increases the number of replicas to 3. For example, for the microprofile-config quickstart, use the following command to scale up the application. Additional Resources For more information about JBoss EAP XP Quickstarts, see the JBoss EAP XP quickstart . | [
"oc new-project PROJECT_NAME",
"oc new-project eap-demo",
"create -f 1234567_myserviceaccount-secret.yaml",
"secrets link default 1234567-myserviceaccount-pull-secret --for=pull secrets link builder 1234567-myserviceaccount-pull-secret --for=pull",
"replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp4/eap-xp4-openjdk11-image-stream.json",
"replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp4/templates/eap-xp4-basic-s2i.json",
"replace -n openshift --force -f",
"replace -n PROJECT_NAME --force -f",
"oc new-app --template=eap-xp4-basic-s2i \\ 1 -p EAP_IMAGE_NAME=jboss-eap-xp4-openjdk11-openshift:latest -p EAP_RUNTIME_IMAGE_NAME=jboss-eap-xp4-openjdk11-runtime-openshift:latest -p IMAGE_STREAM_NAMESPACE=eap-demo \\ 2 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \\ 3 -p SOURCE_REPOSITORY_REF=xp-4.0.x \\ 4 -p CONTEXT_DIR=microprofile-config 5",
"oc get bc -o name",
"oc logs -f buildconfig/USD{APPLICATION_NAME}-build-artifacts ... Push successful oc logs -f buildconfig/USD{APPLICATION_NAME} ... Push successful",
"oc logs -f buildconfig/eap-xp4-basic-app-build-artifacts ... Push successful oc logs -f buildconfig/eap-xp4-basic-app ... Push successful",
"oc get service",
"oc expose service/eap-xp4-basic-app --port=8080",
"oc get route",
"oc scale deploymentconfig DEPLOYMENTCONFIG_NAME --replicas=3",
"oc scale deploymentconfig/eap-xp4-basic-app --replicas=3"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/using-the-openshift-image-for-jboss-eap-xp_default |
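A quick way to confirm that the import in Section 5.3 worked is to list the imported objects in the project and inspect the template parameters before overriding them. This is an illustrative sketch; the project name eap-demo and the eap-xp4 object names follow the examples above and may differ in your environment.

# List the imported imagestreams and templates in the project (names assumed from the examples above)
oc get imagestreams -n eap-demo | grep eap-xp4
oc get templates -n eap-demo | grep eap-xp4

# Show the parameters and default values of the S2I template before overriding them
oc describe template eap-xp4-basic-s2i -n eap-demo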
Updating Red Hat JBoss Enterprise Application Platform | Updating Red Hat JBoss Enterprise Application Platform Red Hat JBoss Enterprise Application Platform 8.0 Comprehensive instructions for updating Red Hat JBoss Enterprise Application Platform. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/index |
Chapter 2. Downloading Red Hat Enterprise Linux | Chapter 2. Downloading Red Hat Enterprise Linux If you have a Red Hat subscription, you can download ISO image files of the Red Hat Enterprise Linux 7 installation DVD from the Red Hat Customer Portal. If you do not have a subscription, either purchase one or obtain a free evaluation subscription from the Software & Download Center at https://access.redhat.com/downloads/ . There are two basic types of installation media available for the AMD64 and Intel 64 (x86_64), ARM (Aarch64), and IBM Power Systems (ppc64) architectures: Binary DVD A full installation image that boots the installation program and performs the entire installation without additional package repositories. Note Binary DVDs are also available for IBM Z. They can be used to boot the installation program using a SCSI DVD drive or as installation sources. Boot.iso A minimal boot image that boots the installation program but requires access to additional package repositories. Red Hat does not provide the repository; you must create it using the full installation ISO image. Note Supplementary DVD images containing additional packages, such as the IBM Java Runtime Environment and additional virtualization drivers, may be available, but they are beyond the scope of this document. If you have a subscription or evaluation subscription, follow these steps to obtain the Red Hat Enterprise Linux 7 ISO image files: Procedure 2.1. Downloading Red Hat Enterprise Linux ISO Images Visit the Customer Portal at https://access.redhat.com/home . If you are not logged in, click LOG IN on the right side of the page. Enter your account credentials when prompted. Click DOWNLOADS at the top of the page. Click Red Hat Enterprise Linux . Ensure that you select the appropriate Product Variant and Architecture for your installation target. By default, Red Hat Enterprise Linux Server and x86_64 are selected. If you are not sure which variant best suits your needs, see http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux . Additionally, a list of packages available for every variant is available in the Red Hat Enterprise Linux 7 Package Manifest . A list of available downloads is displayed; most notably, a minimal Boot ISO image and a full installation Binary DVD ISO image. These files are described above. Additional images can be available, such as preconfigured virtual machine images, which are beyond the scope of this document. Choose the image file that you want to use. You have two ways to download it from the Customer Portal: Click its name to begin downloading it to your computer using your web browser. Right-click the name and then click Copy Link Location or a similar menu item, the exact wording of which depends on the browser that you are using. This action copies the URL of the file to your clipboard, which allows you to use an alternative application to download the file to your computer. This approach is especially useful if your Internet connection is unstable: in that case, your browser might fail to download the whole file, and an attempt to resume the interrupted download process fails because the download link contains an authentication key which is only valid for a short time. Specialized applications such as curl can, however, be used to resume interrupted download attempts from the Customer Portal, which means that you need not download the whole file again and thus you save time and bandwidth. Procedure 2.2.
Using curl to Download Installation Media Make sure the curl package is installed by running the following command as root: If your Linux distribution does not use yum , or if you do not use Linux at all, download the most appropriate software package from the curl web site . Open a terminal window, enter a suitable directory, and type the following command: Replace filename.iso with the ISO image name as displayed in the Customer Portal, such as rhel-server-7.0-x86_64-dvd.iso . This is important because the download link in the Customer Portal contains extra characters which curl would otherwise use in the downloaded file name, too. Then, keep the single quotation mark in front of the parameter, and replace copied_link_location with the link that you have copied from the Customer Portal; copy it again if you copied the commands above in the meantime. Note that in Linux, you can paste the content of the clipboard into the terminal window by middle-clicking anywhere in the window, or by pressing Shift + Insert . Finally, use another single quotation mark after the last parameter, and press Enter to run the command and start transferring the ISO image. The single quotation marks prevent the command line interpreter from misinterpreting any special characters that might be included in the download link. Example 2.1. Downloading an ISO image with curl The following is an example of a curl command line: Note that the actual download link is much longer because it contains complicated identifiers. If your Internet connection does drop before the transfer is complete, refresh the download page in the Customer Portal; log in again if necessary. Copy the new download link, use the same basic curl command line parameters as earlier but be sure to use the new download link, and add -C - to instruct curl to automatically determine where it should continue based on the size of the already downloaded file. Example 2.2. Resuming an interrupted download attempt The following is an example of a curl command line that you use if you have only partially downloaded the ISO image of your choice: Optionally, you can use a checksum utility such as sha256sum to verify the integrity of the image file after the download finishes. All downloads on the Download Red Hat Enterprise Linux page are provided with their checksums for reference: Similar tools are available for Microsoft Windows and Mac OS X . You can also use the installation program to verify the media when starting the installation; see Section 23.2.2, "Verifying Boot Media" for details. After you have downloaded an ISO image file from the Customer Portal, you can: Burn it to a CD or DVD as described in Section 3.1, "Making an Installation CD or DVD" . Use it to create a bootable USB drive; see Section 3.2, "Making Installation USB Media" . Place it on a server to prepare for a network installation. For specific directions, see Section 3.3.3, "Installation Source on a Network" . Place it on a hard drive to use the drive as an installation source. For specific instructions, see Section 3.3.2, "Installation Source on a Hard Drive" . Use it to prepare a Preboot Execution Environment (PXE) server, which allows you to boot the installation system over a network. See Chapter 24, Preparing for a Network Installation for instructions. | [
"yum install curl",
"curl -o filename.iso ' copied_link_location '",
"curl -o rhel-server-7.0-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-server-7.0-x86_64-dvd.iso?_auth_=141...7bf'",
"curl -o rhel-server-7.0-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-server-7.0-x86_64-dvd.iso?_auth_=141...963' -C -",
"sha256sum rhel-server-7.0-x86_64-dvd.iso 85a...46c rhel-server-7.0-x86_64-dvd.iso"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-download-red-hat-enterprise-linux |
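The checksum verification mentioned in the chapter can also be done non-interactively with sha256sum -c . The sketch below is illustrative only: the truncated 85a...46c string is a placeholder and must be replaced with the full checksum published next to the download on the Customer Portal before the check can succeed.

# Write the published checksum and file name to a file (two spaces between them),
# then let sha256sum compare it against the downloaded image.
echo '85a...46c  rhel-server-7.0-x86_64-dvd.iso' > rhel-server-7.0-x86_64-dvd.iso.sha256
sha256sum -c rhel-server-7.0-x86_64-dvd.iso.sha256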
Chapter 8. Removing storage devices | Chapter 8. Removing storage devices You can safely remove a storage device from a running system, which helps prevent system memory overload and data loss. Prerequisites Before you remove a storage device, you must ensure that you have enough free system memory due to the increased system memory load during an I/O flush. Use the following commands to view the current memory load and free memory of the system: Red Hat does not recommend removing a storage device on a system where: Free memory is less than 5% of the total memory in more than 10 samples per 100. Swapping is active (non-zero si and so columns in the vmstat command output). 8.1. Safe removal of storage devices Safely removing a storage device from a running system requires a top-to-bottom approach. Start from the top layer, which typically is an application or a file system, and work towards the bottom layer, which is the physical device. You can use storage devices in multiple ways, and they can have different virtual configurations on top of physical devices. For example, you can group multiple instances of a device into a multipath device, make it part of a RAID, or you can make it part of an LVM group. Additionally, devices can be accessed via a file system, or they can be accessed directly, such as a "raw" device. While using the top-to-bottom approach, you must ensure that: the device that you want to remove is not in use all pending I/O to the device is flushed the operating system is not referencing the storage device 8.2. Removing block devices and associated metadata To safely remove a block device from a running system and help prevent system memory overload and data loss, you must first remove the metadata from it. Address each layer in the stack, starting with the file system, and proceed to the disk. These actions prevent putting your system into an inconsistent state. Use specific commands that may vary depending on what type of devices you are removing: lvremove , vgremove and pvremove are specific to LVM. For software RAID, run mdadm to remove the array. For more information, see Managing RAID . For block devices encrypted using LUKS, there are specific additional steps. The following procedure will not work for the block devices encrypted using LUKS. For more information, see Encrypting block devices using LUKS . Warning Rescanning the SCSI bus or performing any other action that changes the state of the operating system without following the procedure documented here can cause delays due to I/O timeouts, devices to be removed unexpectedly, or data loss. Prerequisites You have an existing block device stack containing the file system, the logical volume, and the volume group. You ensured that no other applications or services are using the device that you want to remove. You backed up the data from the device that you want to remove. Optional: If you want to remove a multipath device, and you are unable to access its path devices, disable queueing of the multipath device by running the following command: This enables the I/O of the device to fail, allowing the applications that are using the device to shut down. Note Removing devices with their metadata one layer at a time ensures no stale signatures remain on the disk. Procedure Unmount the file system: Remove the file system: Note If you have added an entry into the /etc/fstab file to make a persistent association between the file system and a mount point, you should also edit /etc/fstab at this point to remove that entry.
Continue with the following steps, depending on the type of the device you want to remove: Remove the logical volume (LV) that contained the file system: If there are no other logical volumes remaining in the volume group (VG), you can safely remove the VG that contained the device: Remove the physical volume (PV) metadata from the PV device(s): Remove the partitions that contained the PVs: Note Follow the next steps only if you want to fully wipe the device. Remove the partition table: Note Follow the next steps only if you want to physically remove the device. If you are removing a multipath device, execute the following commands: View all the paths to the device: The output of this command is required in a later step. Flush the I/O and remove the multipath device: If the device is not configured as a multipath device, or if the device is configured as a multipath device and you have previously passed I/O to the individual paths, flush any outstanding I/O to all device paths that are used: This is important for devices accessed directly where the umount or vgreduce commands do not flush the I/O. If you are removing a SCSI device, execute the following commands: Remove any reference to the path-based name of the device, such as /dev/sd , /dev/disk/by-path , or the major:minor number, in applications, scripts, or utilities on the system. This ensures that different devices added in the future are not mistaken for the current device. Remove each path to the device from the SCSI subsystem: Here the device-name is retrieved from the output of the multipath -l command, if the device was previously used as a multipath device. Remove the physical device from a running system. Note that the I/O to other devices does not stop when you remove this device. Verification Verify that the devices you intended to remove are not displayed in the output of the lsblk command. The following is an example output: Additional resources The multipath(8) , pvremove(8) , vgremove(8) , lvremove(8) , wipefs(8) , parted(8) , blockdev(8) and umount(8) man pages. A combined sketch of the teardown commands follows the command listing below. | [
"vmstat 1 100 free",
"multipathd disablequeueing map multipath-device",
"umount /mnt/mount-point",
"wipefs -a /dev/vg0/myvol",
"lvremove vg0/myvol",
"vgremove vg0",
"pvremove /dev/sdc1",
"wipefs -a /dev/sdc1",
"parted /dev/sdc rm 1",
"wipefs -a /dev/sdc",
"multipath -l",
"multipath -f multipath-device",
"blockdev --flushbufs device",
"echo 1 > /sys/block/ device-name /device/delete",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 5G 0 disk sr0 11:0 1 1024M 0 rom vda 252:0 0 10G 0 disk |-vda1 252:1 0 1M 0 part |-vda2 252:2 0 100M 0 part /boot/efi `-vda3 252:3 0 9.9G 0 part /"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_device_mapper_multipath/removing-storage-devices_configuring-device-mapper-multipath |
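For reference, the individual commands in this procedure can be strung together for the simple, non-multipath LVM case. The device names below ( /dev/vg0/myvol , vg0 , /dev/sdc1 , /dev/sdc ) are the same assumed examples used above; adjust them before running anything, and skip the full-wipe steps if the disk is to be reused.

# Top-to-bottom teardown sketch for an assumed LVM stack on /dev/sdc
umount /mnt/mount-point          # stop using the file system
wipefs -a /dev/vg0/myvol         # remove the file system signature
lvremove vg0/myvol               # remove the logical volume
vgremove vg0                     # remove the volume group (only if no other LVs remain)
pvremove /dev/sdc1               # remove the physical volume metadata
wipefs -a /dev/sdc1              # wipe the partition signature (full wipe only)
parted /dev/sdc rm 1             # delete the partition (full wipe only)
wipefs -a /dev/sdc               # clear the partition table (full wipe only)
blockdev --flushbufs /dev/sdc    # flush any outstanding I/O before physical removal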
Chapter 1. Introduction | Chapter 1. Introduction Migration Toolkit for Runtimes product will be End of Life on September 30th, 2024 All customers using this product should start their transition to Migration Toolkit for Applications . Migration Toolkit for Applications is fully backwards compatible with all features and rulesets available in Migration Toolkit for Runtimes and will be maintained in the long term. 1.1. About the MTR extension for Microsoft Visual Studio Code You can migrate and modernize applications by using the Migration Toolkit for Runtimes (MTR) extension for Microsoft Visual Studio Code. The MTR extension analyzes your projects using customizable rulesets, marks issues in the source code, provides guidance to fix the issues, and offers automatic code replacement, if possible. The MTR extension is also compatible with Visual Studio Codespaces, the Microsoft cloud-hosted development environment. 1.2. About the Migration Toolkit for Runtimes What is the Migration Toolkit for Runtimes? The Migration Toolkit for Runtimes (MTR) is an extensible and customizable rule-based tool that simplifies the migration and modernization of Java applications. MTR examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. MTR supports many migration paths, including the following examples: Upgrading to the latest release of Red Hat JBoss Enterprise Application Platform Migrating from Oracle WebLogic or IBM WebSphere Application Server to Red Hat JBoss Enterprise Application Platform Containerizing applications and making them cloud-ready Migrating from Java Spring Boot to Quarkus Updating from Oracle JDK to OpenJDK Upgrading from OpenJDK 8 to OpenJDK 11 Upgrading from OpenJDK 11 to OpenJDK 17 Upgrading from OpenJDK 17 to OpenJDK 21 Migrating EAP Java applications to Azure Migrating Spring Boot Java applications to Azure For more information about use cases and migration paths, see the MTR for developers web page. How does the Migration Toolkit for Runtimes simplify migration? The Migration Toolkit for Runtimes looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTR generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved. How do I learn more? See the Introduction to the Migration Toolkit for Runtimes to learn more about the features, supported configurations, system requirements, and available tools in the Migration Toolkit for Runtimes. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/visual_studio_code_extension_guide/introduction |
Chapter 11. Using Ruby on Rails | Chapter 11. Using Ruby on Rails Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform. Warning Go through the whole tutorial to have an overview of all the steps necessary to run your application on the OpenShift Container Platform. If you experience a problem, try reading through the entire tutorial and then going back to your issue. It can also be useful to review your previous steps to ensure that all the steps were run correctly. 11.1. Prerequisites Basic Ruby and Rails knowledge. Locally installed version of Ruby 2.0.0+, Rubygems, Bundler. Basic Git knowledge. Running instance of OpenShift Container Platform 4. Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password. 11.2. Setting up the database Rails applications are almost always used with a database. For local development, use the PostgreSQL database. Procedure Install the database: $ sudo yum install -y postgresql postgresql-server postgresql-devel Initialize the database: $ sudo postgresql-setup initdb This command creates the /var/lib/pgsql/data directory, in which the data is stored. Start the database: $ sudo systemctl start postgresql.service When the database is running, create your rails user: $ sudo -u postgres createuser -s rails Note that the user created has no password. 11.3. Writing your application If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application. Procedure Install the Rails gem: $ gem install rails Example output Successfully installed rails-4.3.0 1 gem installed After you install the Rails gem, create a new application with PostgreSQL as your database: $ rails new rails-app --database=postgresql Change into your new application directory: $ cd rails-app If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile . If not, edit your Gemfile by adding the gem: gem 'pg' Generate a new Gemfile.lock with all your dependencies: $ bundle install In addition to using the postgresql database with the pg gem, you also must ensure that the config/database.yml is using the postgresql adapter. Make sure you updated the default section in the config/database.yml file, so it looks like this: default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password> Create your application's development and test databases: $ rake db:create This creates the development and test databases in your PostgreSQL server. 11.3.1. Creating a welcome page Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page. To have a custom welcome page, you must do the following steps: Create a controller with an index action. Create a view page for the welcome controller index action. Create a route that serves the application's root page with the created controller and view. Rails offers a generator that completes all necessary steps for you. Procedure Run the Rails generator: $ rails generate controller welcome index All the necessary files are created. Edit line 2 in the config/routes.rb file as follows: Run the rails server to verify the page is available: $ rails server You should see your page by visiting http://localhost:3000 in your browser.
If you do not see the page, check the logs that are output to your server to debug. 11.3.2. Configuring application for OpenShift Container Platform To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation. Procedure Edit the default section in your config/database.yml with pre-defined variables as follows: Sample config/database YAML file <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %> <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %> <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV["#{db_service}_SERVICE_HOST"] %> port: <%= ENV["#{db_service}_SERVICE_PORT"] %> database: <%= ENV["POSTGRESQL_DATABASE"] %> 11.3.3. Storing your application in Git Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it. Prerequisites Install git. Procedure Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like: $ ls -1 Example output app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor Run the following commands in your Rails app directory to initialize and commit your code to git: $ git init $ git add . $ git commit -m "initial commit" After your application is committed, you must push it to a remote repository. This example uses a GitHub account, in which you create a new repository. Set the remote that points to your git repository: $ git remote add origin git@github.com:<namespace/repository-name>.git Push your application to your remote git repository. $ git push 11.4. Deploying your application to OpenShift Container Platform You can deploy your application to OpenShift Container Platform. After creating the rails-app project, you are automatically switched to the new project namespace. Deploying your application in OpenShift Container Platform involves three steps: Creating a database service from OpenShift Container Platform's PostgreSQL image. Creating a frontend service from OpenShift Container Platform's Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service. Creating a route for your application. Procedure To deploy your Ruby on Rails application, create a new project for the application: $ oc new-project rails-app --description="My Rails application" --display-name="Rails Application" 11.4.1. Creating the database service Your Rails application expects a running database service. For this service, use the PostgreSQL database image. To create the database service, use the oc new-app command. To this command, you must pass some necessary environment variables which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like.
The variables are as follows: POSTGRESQL_DATABASE POSTGRESQL_USER POSTGRESQL_PASSWORD Setting these variables ensures: A database exists with the specified name. A user exists with the specified name. The user can access the specified database with the specified password. Procedure Create the database service: $ oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password To also set the password for the database administrator, append the following to the command: -e POSTGRESQL_ADMIN_PASSWORD=admin_pw Watch the progress: $ oc get pods --watch 11.4.2. Creating the frontend service To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives. Procedure Create the frontend service and specify the database-related environment variables that were set up when creating the database service: $ oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app . Verify the environment variables have been added by viewing the JSON document of the rails-app deployment config: $ oc get dc rails-app -o json You should see the following section: Example output env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ], Check the build process: $ oc logs -f build/rails-app-1 After the build is complete, look at the running pods in OpenShift Container Platform: $ oc get pods You should see a line starting with myapp-<number>-<hash> , and that is your application running in OpenShift Container Platform. Before your application is functional, you must initialize the database by running the database migration script. There are two ways you can do this: Manually from the running frontend container: Exec into the frontend container with the rsh command: $ oc rsh <frontend_pod_id> Run the migration from inside the container: $ RAILS_ENV=production bundle exec rake db:migrate If you are running your Rails application in a development or test environment, you do not have to specify the RAILS_ENV environment variable. By adding pre-deployment lifecycle hooks in your template (see the sketch after the command listing below). 11.4.3. Creating a route for your application You can expose a service to create a route for your application. Procedure To expose a service by giving it an externally-reachable hostname like www.example.com , use an OpenShift Container Platform route. In this case, you need to expose the frontend service by typing: $ oc expose service rails-app --hostname=www.example.com Warning Ensure the hostname you specify resolves into the IP address of the router. | [
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/images/templates-using-ruby-on-rails |
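The pre-deployment lifecycle hook option mentioned in this chapter can be added from the command line instead of editing the template by hand. The sketch below is assumption-based: it presumes the deployment configuration and its container are both named rails-app , as in the example above, and that the oc set deployment-hook subcommand is available in your oc client.

# Run the database migration as a pre-deployment hook before each new deployment
oc set deployment-hook dc/rails-app --pre -c rails-app \
  -e RAILS_ENV=production \
  -- bundle exec rake db:migrate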
Chapter 20. Workload partitioning in single-node OpenShift | Chapter 20. Workload partitioning in single-node OpenShift In resource-constrained environments, such as single-node OpenShift deployments, use workload partitioning to isolate OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. The minimum number of reserved CPUs required for the cluster management in single-node OpenShift is four CPU Hyper-Threads (HTs). With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition. These pods operate normally within the minimum size CPU configuration. Additional Operators or workloads outside of the set of minimum cluster management pods require additional CPUs to be added to the workload partition. Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities. The following is an overview of the configurations required for workload partitioning: Workload partitioning that uses /etc/crio/crio.conf.d/01-workload-partitioning pins the OpenShift Container Platform infrastructure pods to a defined cpuset configuration. The performance profile pins cluster services such as systemd and kubelet to the CPUs that are defined in the spec.cpu.reserved field. Note Using the Node Tuning Operator, you can configure the performance profile to also pin system-level apps for a complete workload partitioning configuration on the node. The CPUs that you specify in the performance profile spec.cpu.reserved field and the workload partitioning cpuset field must match. Workload partitioning introduces an extended <workload-type>.workload.openshift.io/cores resource for each defined CPU pool, or workload type . Kubelet advertises the resources and CPU requests by pods allocated to the pool within the corresponding resource. When workload partitioning is enabled, the <workload-type>.workload.openshift.io/cores resource allows access to the CPU capacity of the host, not just the default CPU pool. Additional resources For the recommended workload partitioning configuration for single-node OpenShift clusters, see Workload partitioning . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift |
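Because the reserved CPU set in the performance profile must match the cpuset used for workload partitioning, it can help to see the relevant fields together. The following is a sketch only: the CPU ranges, the profile name, and the node selector are placeholders, not values taken from this document, and must be aligned with the cpuset configured in /etc/crio/crio.conf.d/01-workload-partitioning on the node.

# Write an assumed performance profile; spec.cpu.reserved must match the workload
# partitioning cpuset on the node (the ranges below are placeholders).
cat > performance-profile.yaml <<'EOF'
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: sno-workload-partitioning
spec:
  cpu:
    reserved: "0-3"     # pinned cluster management / infrastructure services
    isolated: "4-31"    # left for user workloads
  nodeSelector:
    node-role.kubernetes.io/master: ""
EOF
oc apply -f performance-profile.yaml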
3.4. Configure the Maven Repository | 3.4. Configure the Maven Repository To configure the installed Red Hat JBoss Data Grid Maven repository, edit the settings.xml file. The default version of this file is available in the conf directory of your Maven installation. Maven user settings are located in the .m2 sub-directory of the user's home directory. See http://maven.apache.org/settings.html (the Maven documentation) for more information about configuring Maven. See Section B.2, "Maven Repository Configuration Example" to view a sample Maven configuration. 3.4.1. Steps After the newest available version of Red Hat JBoss Data Grid is installed and Maven is set up and configured, see Section 9.1, "Create a New Red Hat JBoss Data Grid Project" to learn how to use JBoss Data Grid for the first time. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-configure_the_maven_repository
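As a companion to the settings.xml pointer above, the snippet below sketches what a repository profile for the installed JBoss Data Grid Maven repository might look like. The profile id and the file:// path are assumptions; paste the generated snippet into the <profiles> element of your settings.xml and activate it through <activeProfiles>.

# Write a reference snippet to paste into the <profiles> section of ~/.m2/settings.xml
# (the id and the repository path are placeholders for your local unpacked repository).
cat > datagrid-maven-profile.xml <<'EOF'
<profile>
  <id>jboss-datagrid-repository</id>
  <repositories>
    <repository>
      <id>jboss-datagrid-repository</id>
      <url>file:///path/to/jboss-datagrid-maven-repository</url>
      <releases><enabled>true</enabled></releases>
      <snapshots><enabled>false</enabled></snapshots>
    </repository>
  </repositories>
</profile>
EOF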
Appendix D. Producer configuration parameters | Appendix D. Producer configuration parameters key.serializer Type: class Importance: high Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. value.serializer Type: class Importance: high Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). buffer.memory Type: long Default: 33554432 Valid Values: [0,... ] Importance: high The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server, the producer will block for max.block.ms after which it will throw an exception. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. compression.type Type: string Default: none Importance: high The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none , gzip , snappy , lz4 , or zstd . Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). retries Type: int Default: 2147483647 Valid Values: [0,... ,2147483647] Importance: high Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. Note additionally that produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. This is required for clients only if two-way authentication is configured. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates.
ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. batch.size Type: int Default: 16384 Valid Values: [0,... ] Importance: medium The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. Note: This setting gives the upper bound of the batch size to be sent. If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up. This linger.ms setting defaults to 0, which means we'll immediately send out a record even the accumulated batch size is under this batch.size setting. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 
connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. delivery.timeout.ms Type: int Default: 120000 (2 minutes) Valid Values: [0,... ] Importance: medium An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms . linger.ms Type: long Default: 0 Valid Values: [0,... ] Importance: medium The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay-that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5 , for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. max.block.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium The configuration controls how long the KafkaProducer's `send() , partitionsFor() , initTransactions() , sendOffsetsToTransaction() , commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout. max.request.size Type: int Default: 1048576 Valid Values: [0,... ] Importance: medium The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this. 
partitioner.class Type: class Default: org.apache.kafka.clients.producer.internals.DefaultPartitioner Importance: medium A class to use to determine which partition to be send to when produce the records. Available options are: org.apache.kafka.clients.producer.internals.DefaultPartitioner : The default partitioner. This strategy will try sticking to a partition until the batch is full, or linger.ms is up. It works with the strategy: If no partition is specified but a key is present, choose a partition based on a hash of the key If no partition or key is present, choose the sticky partition that changes when the batch is full, or linger.ms is up. org.apache.kafka.clients.producer.RoundRobinPartitioner : This partitioning strategy is that each record in a series of consecutive records will be sent to a different partition(no matter if the 'key' is provided or not), until we run out of partitions and start over again. Note: There's a known issue that will cause uneven distribution when new batch is created. Please check KAFKA-9965 for more detail. org.apache.kafka.clients.producer.UniformStickyPartitioner : This partitioning strategy will try sticking to a partition(no matter if the 'key' is provided or not) until the batch is full, or linger.ms is up. Implementing the org.apache.kafka.clients.producer.Partitioner interface allows you to plug in a custom partitioner. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. 
sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. 
With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. acks Type: string Default: all Valid Values: [all, -1, 0, 1] Importance: low The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1 . acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. enable.idempotence Type: boolean Default: true Importance: low When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a ConfigException will be thrown. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. 
Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors. max.in.flight.requests.per.connection Type: int Default: 5 Valid Values: [1,... ] Importance: low The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this config is set to be greater than 1 and enable.idempotence is set to false, there is a risk of message re-ordering after a failed send due to retries (i.e., if retries are enabled). metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.max.idle.ms Type: long Default: 300000 (5 minutes) Valid Values: [5000,... ] Importance: low Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the access to it will force a metadata fetch request. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. 
sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. 
Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. 
These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. transaction.timeout.ms Type: int Default: 60000 (1 minute) Importance: low The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction.If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with a InvalidTxnTimeoutException error. transactional.id Type: string Default: null Valid Values: non-empty string Importance: low The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers which is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/producer-configuration-parameters-str |
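As an informal illustration of how several of the parameters above combine, the following sketch writes a small producer properties file and passes it to the Kafka console producer. The broker addresses, topic name, file path, and the /opt/kafka installation directory are assumptions made for the example rather than values taken from this appendix, and the tuning numbers are starting points only.

# Write an example configuration (values are illustrative, not recommendations)
cat > /tmp/producer.properties <<'EOF'
bootstrap.servers=broker1.example.com:9092,broker2.example.com:9092
acks=all
enable.idempotence=true
compression.type=gzip
batch.size=32768
linger.ms=5
delivery.timeout.ms=120000
# key.serializer and value.serializer must be set by application clients;
# the console producer below supplies its own serialization.
EOF

# Send test records through the configured producer
/opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server broker1.example.com:9092 \
  --topic my-topic \
  --producer.config /tmp/producer.properties

Because acks=all and enable.idempotence=true are used together here, the settings satisfy the idempotence requirements described above (acks must be all and retries must be greater than 0, which is the default).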
Chapter 26. Finding information on Kafka restarts | Chapter 26. Finding information on Kafka restarts After the Cluster Operator restarts a Kafka pod in an OpenShift cluster, it emits an OpenShift event into the pod's namespace explaining why the pod restarted. For help in understanding cluster behavior, you can check restart events from the command line. Tip You can export and monitor restart events using metrics collection tools like Prometheus. Use the metrics tool with an event exporter that can export the output in a suitable format. 26.1. Reasons for a restart event The Cluster Operator initiates a restart event for a specific reason. You can check the reason by fetching information on the restart event. Table 26.1. Restart reasons Event Description CaCertHasOldGeneration The pod is still using a server certificate signed with an old CA, so needs to be restarted as part of the certificate update. CaCertRemoved Expired CA certificates have been removed, and the pod is restarted to run with the current certificates. CaCertRenewed CA certificates have been renewed, and the pod is restarted to run with the updated certificates. ClientCaCertKeyReplaced The key used to sign clients CA certificates has been replaced, and the pod is being restarted as part of the CA renewal process. ClusterCaCertKeyReplaced The key used to sign the cluster's CA certificates has been replaced, and the pod is being restarted as part of the CA renewal process. ConfigChangeRequiresRestart Some Kafka configuration properties are changed dynamically, but others require that the broker be restarted. FileSystemResizeNeeded The file system size has been increased, and a restart is needed to apply it. KafkaCertificatesChanged One or more TLS certificates used by the Kafka broker have been updated, and a restart is needed to use them. ManualRollingUpdate A user annotated the pod, or the StrimziPodSet set it belongs to, to trigger a restart. PodForceRestartOnError An error occurred that requires a pod restart to rectify. PodHasOldRevision A disk was added or removed from the Kafka volumes, and a restart is needed to apply the change. When using StrimziPodSet resources, the same reason is given if the pod needs to be recreated. PodHasOldRevision The StrimziPodSet that the pod is a member of has been updated, so the pod needs to be recreated. When using StrimziPodSet resources, the same reason is given if a disk was added or removed from the Kafka volumes. PodStuck The pod is still pending, and is not scheduled or cannot be scheduled, so the operator has restarted the pod in a final attempt to get it running. PodUnresponsive AMQ Streams was unable to connect to the pod, which can indicate a broker not starting correctly, so the operator restarted it in an attempt to resolve the issue. 26.2. Restart event filters When checking restart events from the command line, you can specify a field-selector to filter on OpenShift event fields. The following fields are available when filtering events with field-selector . regardingObject.kind The object that was restarted, and for restart events, the kind is always Pod . regarding.namespace The namespace that the pod belongs to. regardingObject.name The pod's name, for example, strimzi-cluster-kafka-0 . regardingObject.uid The unique ID of the pod. reason The reason the pod was restarted, for example, JbodVolumesChanged . reportingController The reporting component is always strimzi.io/cluster-operator for AMQ Streams restart events. 
source source is an older version of reportingController . The reporting component is always strimzi.io/cluster-operator for AMQ Streams restart events. type The event type, which is either Warning or Normal . For AMQ Streams restart events, the type is Normal . Note In older versions of OpenShift, the fields using the regarding prefix might use an involvedObject prefix instead. reportingController was previously called reportingComponent . 26.3. Checking Kafka restarts Use a oc command to list restart events initiated by the Cluster Operator. Filter restart events emitted by the Cluster Operator by setting the Cluster Operator as the reporting component using the reportingController or source event fields. Prerequisites The Cluster Operator is running in the OpenShift cluster. Procedure Get all restart events emitted by the Cluster Operator: oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator Example showing events returned LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled You can also specify a reason or other field-selector options to constrain the events returned. Here, a specific reason is added: oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError Use an output format, such as YAML, to return more detailed information about one or more events. oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml Example showing detailed events output apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: "2022-05-13T00:22:34.168086Z" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: "2022-05-13T00:22:34Z" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: "432961" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: "" selfLink: "" The following fields are deprecated, so they are not populated for these events: firstTimestamp lastTimestamp source | [
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator",
"LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError",
"-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml",
"apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: \"2022-05-13T00:22:34.168086Z\" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: \"2022-05-13T00:22:34Z\" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: \"432961\" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: \"\" selfLink: \"\""
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-deploy-restart-events-str |
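Building on the commands above, the following sketch narrows the output to a single broker pod and prints only the restart reason and message, then watches for new events as they are emitted. The kafka namespace and the strimzi-cluster-kafka-0 pod name are carried over from the examples above and may differ in your cluster.

# Show reason and message for restarts of one pod
oc -n kafka get events \
  --field-selector reportingController=strimzi.io/cluster-operator,regardingObject.name=strimzi-cluster-kafka-0 \
  -o custom-columns=REASON:.reason,MESSAGE:.message

# Watch for new restart events emitted by the Cluster Operator
oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator --watch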
Chapter 5. Automation hub | Chapter 5. Automation hub Automation hub enables you to discover and use new certified automation content, such as Ansible Collections, from Red Hat Ansible and Certified Partners. New features and enhancements This release of automation hub provides repository management functionality. With repository management, you can create, edit, delete, and move content between repositories. Bug fixes Fixed an issue in the collection keyword search which was returning an incorrect number of results. Added the ability to set OPT_REFERRALS option for LDAP, so that users can now successfully log in to the automation hub by using their LDAP credentials. Fixed an error on the UI when redhat.openshift collection's core dependency was throwing a 404 Not Found error. Fixed an error such that the deprecated execution environments are now skipped while syncing with registry.redhat.io . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_release_notes/hub-464-intro |
3.4. Listing Logical Networks | 3.4. Listing Logical Networks This Ruby example lists the logical networks. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of networks: nws_service = system_service.networks_service # Retrieve the list of networks and for each one # print its name: nws = nws_service.list nws.each do |nw| puts nw.name end In an environment with only the default management network, the example outputs: For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/NetworksService:list .
"Get the reference to the root of the services tree: system_service = connection.system_service Get the reference to the service that manages the collection of networks: nws_service = system_service.networks_service Retrieve the list of clusters and for each one print its name: nws = nws_service.list nws.each do |nw| puts nw.name end",
"ovirtmgmt"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/listing_logical_networks |
Part II. Manually installing Red Hat Enterprise Linux | Part II. Manually installing Red Hat Enterprise Linux Setting up a machine for installing Red Hat Enterprise Linux (RHEL) involves several key steps, from booting the installation media to configuring system options. After the installation ISO is booted, you can modify boot settings and monitor installation processes through various consoles and logs. By customizing the system during installation, you ensure that it is tailored to specific needs, and the initial setup process finalizes the configuration for first-time use. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/manually-installing-red-hat-enterprise-linux |
Chapter 7. KSM | Chapter 7. KSM The concept of shared memory is common in modern operating systems. For example, when a program is first started it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents and allows the program to modify this new region. This is known as copy on write. KSM is a new Linux feature which uses this concept in reverse. KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy on write. If the contents of the page is modified by a guest virtual machine, a new page is created for that guest virtual machine. This is useful for virtualization with KVM. When a guest virtual machine is started, it inherits only the memory from the parent qemu-kvm process. Once the guest virtual machine is running, the contents of the guest virtual machine operating system image can be shared when guests are running the same operating system or applications. Note The page deduplication technology (used also by the KSM implementation) may introduce side channels that could potentially be used to leak information across multiple guests. In case this is a concern, KSM can be disabled on a per-guest basis. KSM provides enhanced memory speed and utilization. With KSM, common process data is stored in cache or in main memory. This reduces cache misses for the KVM guests which can improve performance for some applications and operating systems. Secondly, sharing memory reduces the overall memory usage of guests which allows for higher densities and greater utilization of resources. Note Starting in Red Hat Enterprise Linux 6.5, KSM is NUMA aware. This allows it to take NUMA locality into account while coalescing pages, thus preventing performance drops related to pages being moved to a remote node. Red Hat recommends avoiding cross-node memory merging when KSM is in use. If KSM is in use, change the /sys/kernel/mm/ksm/merge_across_nodes tunable to 0 to avoid merging pages across NUMA nodes. Kernel memory accounting statistics can eventually contradict each other after large amounts of cross-node merging. As such, numad can become confused after the KSM daemon merges large amounts of memory. If your system has a large amount of free memory, you may achieve higher performance by turning off and disabling the KSM daemon. Refer to the Red Hat Enterprise Linux Performance Tuning Guide for more information on NUMA. Red Hat Enterprise Linux uses two separate methods for controlling KSM: The ksm service starts and stops the KSM kernel thread. The ksmtuned service controls and tunes the ksm , dynamically managing same-page merging. The ksmtuned service starts ksm and stops the ksm service if memory sharing is not necessary. The ksmtuned service must be told with the retune parameter to run when new guests are created or destroyed. Both of these services are controlled with the standard service management tools. The KSM Service The ksm service is included in the qemu-kvm package. KSM is off by default on Red Hat Enterprise Linux 6. When using Red Hat Enterprise Linux 6 as a KVM host physical machine, however, it is likely turned on by the ksm/ksmtuned services. When the ksm service is not started, KSM shares only 2000 pages. This default is low and provides limited memory saving benefits. 
When the ksm service is started, KSM will share up to half of the host physical machine system's main memory. Start the ksm service to enable KSM to share more memory. The ksm service can be added to the default startup sequence. Make the ksm service persistent with the chkconfig command. The KSM Tuning Service The ksmtuned service does not have any options. The ksmtuned service loops and adjusts ksm . The ksmtuned service is notified by libvirt when a guest virtual machine is created or destroyed. The ksmtuned service can be tuned with the retune parameter. The retune parameter instructs ksmtuned to run tuning functions manually. Before changing the parameters in the file, there are a few terms that need to be clarified: thres - Activation threshold, in kbytes. A KSM cycle is triggered when the thres value added to the sum of all qemu-kvm processes RSZ exceeds total system memory. This parameter is the equivalent in kbytes of the percentage defined in KSM_THRES_COEF . The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file. KSM Variables and Monitoring KSM stores monitoring data in the /sys/kernel/mm/ksm/ directory. Files in this directory are updated by the kernel and are an accurate record of KSM usage and statistics. The variables in the list below are also configurable variables in the /etc/ksmtuned.conf file as noted below. The /sys/kernel/mm/ksm/ files full_scans Full scans run. pages_shared Total pages shared. pages_sharing Pages presently shared. pages_to_scan Pages not scanned. pages_unshared Pages no longer shared. pages_volatile Number of volatile pages. run Whether the KSM process is running. sleep_millisecs Sleep milliseconds. KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings. Deactivating KSM KSM has a performance overhead which may be too large for certain environments or host physical machine systems. KSM can be deactivated by stopping the ksmtuned and the ksm service. Stopping the services deactivates KSM but does not persist after restarting. Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands: Important Ensure the swap size is sufficient for the committed RAM even with KSM. KSM reduces the RAM usage of identical or similar guests. Overcommitting guests with KSM without sufficient swap space may be possible but is not recommended because guest virtual machine memory use can result in pages becoming unshared. | [
"service ksm start Starting ksm: [ OK ]",
"chkconfig ksm on",
"service ksmtuned start Starting ksmtuned: [ OK ]",
"Configuration file for ksmtuned. How long ksmtuned should sleep between tuning adjustments KSM_MONITOR_INTERVAL=60 Millisecond sleep between ksm scans for 16Gb server. Smaller servers sleep more, bigger sleep less. KSM_SLEEP_MSEC=10 KSM_NPAGES_BOOST is added to the npages value, when free memory is less than thres . KSM_NPAGES_BOOST=300 KSM_NPAGES_DECAY Value given is subtracted to the npages value, when free memory is greater than thres . KSM_NPAGES_DECAY=-50 KSM_NPAGES_MIN is the lower limit for the npages value. KSM_NPAGES_MIN=64 KSM_NAGES_MAX is the upper limit for the npages value. KSM_NPAGES_MAX=1250 KSM_TRES_COEF - is the RAM percentage to be calculated in parameter thres . KSM_THRES_COEF=20 KSM_THRES_CONST - If this is a low memory system, and the thres value is less than KSM_THRES_CONST , then reset thres value to KSM_THRES_CONST value. KSM_THRES_CONST=2048 uncomment the following to enable ksmtuned debug information LOGFILE=/var/log/ksmtuned DEBUG=1",
"service ksmtuned stop Stopping ksmtuned: [ OK ] service ksm stop Stopping ksm: [ OK ]",
"chkconfig ksm off chkconfig ksmtuned off"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-KSM |
15.4. Starting and Stopping vsftpd | 15.4. Starting and Stopping vsftpd The vsftpd RPM installs the /etc/rc.d/init.d/vsftpd script, which can be accessed using the /sbin/service command. To start the server, as root type: To stop the server, as root type: The restart option is a shorthand way of stopping and then starting vsftpd . This is the most efficient way to make configuration changes take effect after editing the configuration file for vsftpd . To restart the server, as root type: The condrestart ( conditional restart ) option only starts vsftpd if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server, as root type: By default, the vsftpd service does not start automatically at boot time. To configure the vsftpd service to start at boot time, use an initscript utility, such as /sbin/chkconfig , /usr/sbin/ntsysv , or the Services Configuration Tool program. Refer to the chapter titled Controlling Access to Services in System Administrators Guide for more information regarding these tools. 15.4.1. Starting Multiple Copies of vsftpd Sometimes one computer is used to serve multiple FTP domains. This is a technique called multihoming . One way to multihome using vsftpd is by running multiple copies of the daemon, each with its own configuration file. To do this, first assign all relevant IP addresses to network devices or alias network devices on the system. Refer to the chapter titled Network Configuration in System Administrators Guide for more information about configuring network devices and device aliases. Additional information can be found about network configuration scripts in Chapter 8, Network Interfaces . , the DNS server for the FTP domains must be configured to reference the correct machine. If the DNS server is running on Red Hat Enterprise Linux, refer to the chapter titled BIND Configuration in System Administrators Guide for instructions about using the Domain Name Service Configuration Tool ( system-config-bind ). For information about BIND and its configuration files, refer to Chapter 12, Berkeley Internet Name Domain (BIND) . For vsftpd to answer requests on different IP addresses, multiple copies of the daemon must be running. The first copy must be run using the vsftpd initscripts, as outlined in Section 15.4, "Starting and Stopping vsftpd " . This copy uses the standard configuration file, /etc/vsftpd/vsftpd.conf . Each additional FTP site must have a configuration file with a unique name in the /etc/vsftpd/ directory, such as /etc/vsftpd/vsftpd-site-2.conf . Each configuration file must be readable and writable only by root. Within each configuration file for each FTP server listening on an IPv4 network, the following directive must be unique: Replace N.N.N.N with the unique IP address for the FTP site being served. If the site is using IPv6, use the listen_address6 directive instead. Once each additional server has a configuration file, the vsftpd daemon must be launched from a root shell prompt using the following command: In the above command, replace <configuration-file> with the unique name for the server's configuration file, such as /etc/vsftpd/vsftpd-site-2.conf . Other directives to consider altering on a per-server basis are: anon_root local_root vsftpd_log_file xferlog_file For a detailed list of directives available within vsftpd 's configuration file, refer to Section 15.5, " vsftpd Configuration Options" . 
To configure any additional servers to start automatically at boot time, add the above command to the end of the /etc/rc.local file. | [
"service vsftpd start",
"service vsftpd stop",
"service vsftpd restart",
"service vsftpd condrestart",
"listen_address= N.N.N.N",
"vsftpd /etc/vsftpd/ <configuration-file> &"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ftp-vsftpd-start |
Chapter 2. Installing Automation content navigator on RHEL | Chapter 2. Installing Automation content navigator on RHEL As a content creator, you can install Automation content navigator on Red Hat Enterprise Linux (RHEL) 8 or later. 2.1. Installing Automation content navigator on RHEL from an RPM You can install Automation content navigator on Red Hat Enterprise Linux (RHEL) from an RPM. Prerequisites You have installed RHEL 8 or later. You registered your system with Red Hat Subscription Manager. Note Ensure that you only install the navigator matching your current Red Hat Ansible Automation Platform environment. Procedure Attach the Red Hat Ansible Automation Platform SKU: $ subscription-manager attach --pool=<sku-pool-id> Install Automation content navigator with the following command: # dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-navigator Verification Verify your Automation content navigator installation: $ ansible-navigator --help The following example demonstrates a successful installation: 
"subscription-manager attach --pool=<sku-pool-id>",
"dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-navigator",
"ansible-navigator --help"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/automation_content_navigator_creator_guide/assembly-installing_on_rhel_ansible-navigator |
23.4. Additional Resources | 23.4. Additional Resources For configuration options not covered here, refer to the following resources. 23.4.1. Installed Documentation dhcpd man page - Describes how the DHCP daemon works. dhcpd.conf man page - Explains how to configure the DHCP configuration file; includes some examples. dhcpd.leases man page - Explains how to configure the DHCP leases file; includes some examples. dhcp-options man page - Explains the syntax for declaring DHCP options in dhcpd.conf ; includes some examples. dhcrelay man page - Explains the DHCP Relay Agent and its configuration options. /usr/share/doc/dhcp-< version >/ - Contains sample files, README files, and release notes for the specific version of the DHCP service. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Dynamic_Host_Configuration_Protocol_DHCP-Additional_Resources |
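For convenience, a short sketch of how these installed resources are typically opened on the system; the dhcp package version in the documentation path varies by release.

man dhcpd.conf
man dhcp-options
ls /usr/share/doc/dhcp-*/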
16.4. The guestfish Shell | 16.4. The guestfish Shell guestfish is an interactive shell that you can use from the command line or from shell scripts to access guest virtual machine file systems. All of the functionality of the libguestfs API is available from the shell. To begin viewing or editing a virtual machine disk image, run the following command, substituting the path to your desired disk image: --ro means that the disk image is opened read-only. This mode is always safe but does not allow write access. Only omit this option when you are certain that the guest virtual machine is not running, or the disk image is not attached to a live guest virtual machine. It is not possible to use libguestfs to edit a live guest virtual machine, and attempting to will result in irreversible disk corruption. /path/to/disk/image is the path to the disk. This can be a file, a host physical machine logical volume (such as /dev/VG/LV), a host physical machine device (/dev/cdrom) or a SAN LUN (/dev/sdf3). Note libguestfs and guestfish do not require root privileges. You only need to run them as root if the disk image being accessed needs root to read or write or both. When you start guestfish interactively, it will display this prompt: At the prompt, type run to initiate the library and attach the disk image. This can take up to 30 seconds the first time it is done. Subsequent starts will complete much faster. Note libguestfs will use hardware virtualization acceleration such as KVM (if available) to speed up this process. Once the run command has been entered, other commands can be used, as the following section demonstrates. 16.4.1. Viewing File Systems with guestfish This section provides information about viewing files with guestfish. 16.4.1.1. Manual Listing and Viewing The list-filesystems command will list file systems found by libguestfs. This output shows a Red Hat Enterprise Linux 4 disk image: This output shows a Windows disk image: Other useful commands are list-devices , list-partitions , lvs , pvs , vfs-type and file . You can get more information and help on any command by typing help command , as shown in the following output: To view the actual contents of a file system, it must first be mounted. This example uses one of the Windows partitions shown in the output ( /dev/vda2 ), which in this case is known to correspond to the C:\ drive: You can use guestfish commands such as ls , ll , cat , more , download and tar-out to view and download files and directories. Note There is no concept of a current working directory in this shell. Unlike ordinary shells, you cannot for example use the cd command to change directories. All paths must be fully qualified starting at the top with a forward slash ( / ) character. Use the Tab key to complete paths. To exit from the guestfish shell, type exit or enter Ctrl+d . 16.4.1.2. Using guestfish inspection Instead of listing and mounting file systems by hand, it is possible to let guestfish itself inspect the image and mount the file systems as they would be in the guest virtual machine. To do this, add the -i option on the command line: Because guestfish needs to start up the libguestfs back end in order to perform the inspection and mounting, the run command is not necessary when using the -i option. The -i option works for many common Linux and Windows guest virtual machines. 16.4.1.3. 
Accessing a guest virtual machine by name A guest virtual machine can be accessed from the command line when you specify its name as known to libvirt (in other words, as it appears in virsh list --all ). Use the -d option to access a guest virtual machine by its name, with or without the -i option: | [
"guestfish --ro -a /path/to/disk/image",
"guestfish --ro -a /path/to/disk/image Welcome to guestfish, the libguestfs filesystem interactive shell for editing virtual machine filesystems. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs>",
"><fs> run ><fs> list-filesystems /dev/vda1: ext3 /dev/VolGroup00/LogVol00: ext3 /dev/VolGroup00/LogVol01: swap",
"><fs> run ><fs> list-filesystems /dev/vda1: ntfs /dev/vda2: ntfs",
"><fs> help vfs-type NAME vfs-type - get the Linux VFS type corresponding to a mounted device SYNOPSIS vfs-type device DESCRIPTION This command gets the file system type corresponding to the file system on \"device\". For most file systems, the result is the name of the Linux VFS module which would be used to mount this file system if you mounted it without specifying the file system type. For example a string such as \"ext3\" or \"ntfs\".",
"><fs> mount-ro /dev/vda2 / ><fs> ll / total 1834753 drwxrwxrwx 1 root root 4096 Nov 1 11:40 . drwxr-xr-x 21 root root 4096 Nov 16 21:45 .. lrwxrwxrwx 2 root root 60 Jul 14 2009 Documents and Settings drwxrwxrwx 1 root root 4096 Nov 15 18:00 Program Files drwxrwxrwx 1 root root 4096 Sep 19 10:34 Users drwxrwxrwx 1 root root 16384 Sep 19 10:34 Windows",
"guestfish --ro -a /path/to/disk/image -i Welcome to guestfish, the libguestfs filesystem interactive shell for editing virtual machine filesystems. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 4 (Nahant Update 8) /dev/VolGroup00/LogVol00 mounted on / /dev/vda1 mounted on /boot ><fs> ll / total 210 drwxr-xr-x. 24 root root 4096 Oct 28 09:09 . drwxr-xr-x 21 root root 4096 Nov 17 15:10 .. drwxr-xr-x. 2 root root 4096 Oct 27 22:37 bin drwxr-xr-x. 4 root root 1024 Oct 27 21:52 boot drwxr-xr-x. 4 root root 4096 Oct 27 21:21 dev drwxr-xr-x. 86 root root 12288 Oct 28 09:09 etc [etc]",
"guestfish --ro -d GuestName -i"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-the_guestfish_shell |
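As a usage sketch, guestfish can also be driven non-interactively from a shell script by passing commands on standard input; the guest name GuestName and the file shown below are placeholders, not values taken from this guide:

guestfish --ro -d GuestName -i <<'EOF'
ll /
cat /etc/redhat-release
EOF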
25.5.3. Managing Queues | 25.5.3. Managing Queues All types of queues can be further configured to match your requirements. You can use several directives to modify both action queues and the main message queue. Currently, there are more than 20 queue parameters available; see the section called "Online Documentation" . Some of these settings are commonly used; others, such as worker thread management, provide closer control over the queue behavior and are reserved for advanced users. With advanced settings, you can optimize rsyslog 's performance, schedule queuing, or modify the behavior of a queue on system shutdown. Limiting Queue Size You can limit the number of messages that a queue can contain with the following setting: $objectQueueHighWatermark number Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue respectively. Replace number with a number of enqueued messages. You can set the queue size only as the number of messages, not as their actual memory size. The default queue size is 10,000 messages for the main message queue and ruleset queues, and 1000 for action queues. Disk-assisted queues are unlimited by default and cannot be restricted with this directive, but you can reserve physical disk space for them in bytes with the following setting: $objectQueueMaxDiscSpace number Replace object with MainMsg or with Action . When the size limit specified by number is hit, messages are discarded until a sufficient amount of space is freed by dequeued messages. Discarding Messages When a queue reaches a certain number of messages, you can discard less important messages in order to save space in the queue for entries of higher priority. The threshold that launches the discarding process can be set with the so-called discard mark : $objectQueueDiscardMark number Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue respectively. Here, number stands for the number of messages that have to be in the queue to start the discarding process. To define which messages to discard, use: $objectQueueDiscardSeverity priority Replace priority with one of the following keywords (or with a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). With this setting, both newly incoming and already queued messages with lower than the defined priority are erased from the queue immediately after the discard mark is reached. Using Timeframes You can configure rsyslog to process queues during a specific time period. With this option you can, for example, transfer some processing into off-peak hours. To define a time frame, use the following syntax: $objectQueueDequeueTimeBegin hour $objectQueueDequeueTimeEnd hour With hour you can specify the hours that bound your time frame. Use the 24-hour format without minutes. Configuring Worker Threads A worker thread performs a specified action on the enqueued message. For example, in the main message queue, a worker task is to apply filter logic to each incoming message and enqueue it to the relevant action queues. When a message arrives, a worker thread is started automatically. When the number of messages reaches a certain number, another worker thread is started. To specify this number, use: $objectQueueWorkerThreadMinimumMessages number Replace number with the number of messages that will trigger a supplemental worker thread.
For example, with number set to 100, a new worker thread is started when more than 100 messages arrive. When more than 200 messages arrive, the third worker thread starts, and so on. However, too many worker threads running in parallel become ineffective, so you can limit the maximum number of them by using: $objectQueueWorkerThreads number where number stands for the maximum number of worker threads that can run in parallel. For the main message queue, the default limit is 1 thread. Once a worker thread has been started, it keeps running until an inactivity timeout occurs. To set the length of the timeout, type: $objectQueueWorkerTimeoutThreadShutdown time Replace time with the duration set in milliseconds. Without this setting, a zero timeout is applied and a worker thread is terminated immediately when it runs out of messages. If you specify time as -1 , no thread will be closed. Batch Dequeuing To increase performance, you can configure rsyslog to dequeue multiple messages at once. To set the upper limit for such dequeueing, use: $objectQueueDequeueBatchSize number Replace number with the maximum number of messages that can be dequeued at once. Note that a higher setting combined with a higher number of permitted worker threads results in greater memory consumption. Terminating Queues When terminating a queue that still contains messages, you can try to minimize the data loss by specifying a time interval for worker threads to finish the queue processing: $objectQueueTimeoutShutdown time Specify time in milliseconds. If after that period there are still some enqueued messages, workers finish the current data element and then terminate. Unprocessed messages are therefore lost. Another time interval can be set for workers to finish the final element: $objectQueueTimeoutActionCompletion time In case this timeout expires, any remaining workers are shut down. To save data at shutdown, use: $objectQueueTimeoutSaveOnShutdown time If set, all queue elements are saved to disk before rsyslog terminates. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-managing_queues |
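To illustrate how the object placeholder combines with the directive names above, the following minimal /etc/rsyslog.conf sketch tunes the main message queue and one action queue; the numeric values are illustrative assumptions and should be adapted to your workload:

$MainMsgQueueHighWatermark 8000
$MainMsgQueueDiscardMark 9500
$MainMsgQueueDiscardSeverity info
$ActionQueueWorkerThreads 2
$ActionQueueDequeueBatchSize 64
$ActionQueueTimeoutShutdown 1000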
Builds | Builds OpenShift Container Platform 4.12 Builds Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/builds/index |
Chapter 4. Using external storage | Chapter 4. Using external storage Organizations can have databases containing information, passwords, and other credentials. Typically, you cannot migrate existing data storage to a Red Hat build of Keycloak deployment so Red Hat build of Keycloak can federate existing external user databases. Red Hat build of Keycloak supports LDAP and Active Directory, but you can also code extensions for any custom user database by using the Red Hat build of Keycloak User Storage SPI. When a user attempts to log in, Red Hat build of Keycloak examines that user's storage to find that user. If Red Hat build of Keycloak does not find the user, Red Hat build of Keycloak iterates over each User Storage provider for the realm until it finds a match. Data from the external data storage then maps into a standard user model the Red Hat build of Keycloak runtime consumes. This user model then maps to OIDC token claims and SAML assertion attributes. External user databases rarely have the data necessary to support all the features of Red Hat build of Keycloak, so the User Storage Provider can opt to store items locally in Red Hat build of Keycloak user data storage. Providers can import users locally and sync periodically with external data storage. This approach depends on the capabilities of the provider and the configuration of the provider. For example, your external user data storage may not support OTP. The OTP can be handled and stored by Red Hat build of Keycloak, depending on the provider. 4.1. Adding a provider To add a storage provider, perform the following procedure: Procedure Click User Federation in the menu. User federation Select the provider type card from the listed cards. Red Hat build of Keycloak brings you to that provider's configuration page. 4.2. Dealing with provider failures If a User Storage Provider fails, you may not be able to log in and view users in the Admin Console. Red Hat build of Keycloak does not detect failures when using a Storage Provider to look up a user, so it cancels the invocation. If you have a Storage Provider with a high priority that fails during user lookup, the login or user query fails with an exception and will not fail over to the configured provider. Red Hat build of Keycloak searches the local Red Hat build of Keycloak user database first to resolve users before any LDAP or custom User Storage Provider. Consider creating an administrator account stored in the local Red Hat build of Keycloak user database in case of problems connecting to your LDAP and back ends. Each LDAP and custom User Storage Provider has an enable toggle on its Admin Console page. Disabling the User Storage Provider skips the provider when performing queries, so you can view and log in with user accounts in a different provider with lower priority. If your provider uses an import strategy and is disabled, imported users are still available for lookup in read-only mode. When a Storage Provider lookup fails, Red Hat build of Keycloak does not fail over because user databases often have duplicate usernames or duplicate emails between them. Duplicate usernames and emails can cause problems because the user loads from one external data store when the admin expects them to load from another data store. 4.3. Lightweight Directory Access Protocol (LDAP) and Active Directory Red Hat build of Keycloak includes an LDAP/AD provider. 
You can federate multiple different LDAP servers in one Red Hat build of Keycloak realm and map LDAP user attributes into the Red Hat build of Keycloak common user model. By default, Red Hat build of Keycloak maps the username, email, first name, and last name of the user account, but you can also configure additional mappings . Red Hat build of Keycloak's LDAP/AD provider supports password validation using LDAP/AD protocols and storage, edit, and synchronization modes. 4.3.1. Configuring federated LDAP storage Procedure Click User Federation in the menu. User federation Click Add LDAP providers . Red Hat build of Keycloak brings you to the LDAP configuration page. 4.3.2. Storage mode Red Hat build of Keycloak imports users from LDAP into the local Red Hat build of Keycloak user database. This copy of the user database synchronizes on-demand or through a periodic background task. An exception exists for synchronizing passwords. Red Hat build of Keycloak never imports passwords. Password validation always occurs on the LDAP server. The advantage of synchronization is that all Red Hat build of Keycloak features work efficiently because any required extra per-user data is stored locally. The disadvantage is that each time Red Hat build of Keycloak queries a specific user for the first time, Red Hat build of Keycloak performs a corresponding database insert. You can synchronize the import with your LDAP server. Import synchronization is unnecessary when LDAP mappers always read particular attributes from the LDAP rather than the database. You can use LDAP with Red Hat build of Keycloak without importing users into the Red Hat build of Keycloak user database. The LDAP server backs up the common user model that the Red Hat build of Keycloak runtime uses. If LDAP does not support data that a Red Hat build of Keycloak feature requires, that feature will not work. The advantage of this approach is that you do not have the resource usage of importing and synchronizing copies of LDAP users into the Red Hat build of Keycloak user database. The Import Users switch on the LDAP configuration page controls this storage mode. To import users, toggle this switch to ON . Note If you disable Import Users , you cannot save user profile attributes into the Red Hat build of Keycloak database. Also, you cannot save metadata except for user profile metadata mapped to the LDAP. This metadata can include role mappings, group mappings, and other metadata based on the LDAP mappers' configuration. When you attempt to change the non-LDAP mapped user data, the user update is not possible. For example, you cannot disable the LDAP mapped user unless the user's enabled flag maps to an LDAP attribute. 4.3.3. Edit mode Users and admins can modify user metadata, users through the Account Console , and administrators through the Admin Console. The Edit Mode configuration on the LDAP configuration page defines the user's LDAP update privileges. READONLY You cannot change the username, email, first name, last name, and other mapped attributes. Red Hat build of Keycloak shows an error anytime a user attempts to update these fields. Password updates are not supported. WRITABLE You can change the username, email, first name, last name, and other mapped attributes and passwords and synchronize them automatically with the LDAP store. 
UNSYNCED Red Hat build of Keycloak stores changes to the username, email, first name, last name, and passwords in Red Hat build of Keycloak local storage, so the administrator must synchronize this data back to LDAP. In this mode, Red Hat build of Keycloak deployments can update user metadata on read-only LDAP servers. This option also applies when importing users from LDAP into the local Red Hat build of Keycloak user database. Note When Red Hat build of Keycloak creates the LDAP provider, Red Hat build of Keycloak also creates a set of initial LDAP mappers . Red Hat build of Keycloak configures these mappers based on a combination of the Vendor , Edit Mode , and Import Users switches. For example, when edit mode is UNSYNCED, Red Hat build of Keycloak configures the mappers to read a particular user attribute from the database and not from the LDAP server. However, if you later change the edit mode, the mapper's configuration does not change because it is impossible to detect if the configuration changes changed in UNSYNCED mode. Decide the Edit Mode when creating the LDAP provider. This note applies to Import Users switch also. 4.3.4. Other configuration options Console Display Name The name of the provider to display in the admin console. Priority The priority of the provider when looking up users or adding a user. Sync Registrations Toggle this switch to ON if you want new users created by Red Hat build of Keycloak added to LDAP. Allow Kerberos authentication Enable Kerberos/SPNEGO authentication in the realm with user data provisioned from LDAP. For more information, see the Kerberos section . Other options Hover the mouse pointer over the tooltips in the Admin Console to see more details about these options. 4.3.5. Connecting to LDAP over SSL When you configure a secure connection URL to your LDAP store (for example, ldaps://myhost.com:636 ), Red Hat build of Keycloak uses SSL to communicate with the LDAP server. Configure a truststore on the Red Hat build of Keycloak server side so that Red Hat build of Keycloak can trust the SSL connection to LDAP - see Configuring a Truststore chapter. The Use Truststore SPI configuration property is deprecated. It should normally be left as Always . 4.3.6. Synchronizing LDAP users to Red Hat build of Keycloak If you set the Import Users option, the LDAP Provider handles importing LDAP users into the Red Hat build of Keycloak local database. The first time a user logs in or is returned as part of a user query (e.g. using the search field in the admin console), the LDAP provider imports the LDAP user into the Red Hat build of Keycloak database. During authentication, the LDAP password is validated. If you want to sync all LDAP users into the Red Hat build of Keycloak database, configure and enable the Sync Settings on the LDAP provider configuration page. Two types of synchronization exist: Periodic Full sync This type synchronizes all LDAP users into the Red Hat build of Keycloak database. The LDAP users already in Red Hat build of Keycloak, but different in LDAP, directly update in the Red Hat build of Keycloak database. Periodic Changed users sync When synchronizing, Red Hat build of Keycloak creates or updates users created or updated after the last sync only. The best way to synchronize is to click Synchronize all users when you first create the LDAP provider, then set up periodic synchronization of changed users. 4.3.7. LDAP mappers LDAP mappers are listeners triggered by the LDAP Provider. 
They provide another extension point to LDAP integration. LDAP mappers are triggered when: Users log in by using LDAP. Users initially register. The Admin Console queries a user. When you create an LDAP Federation provider, Red Hat build of Keycloak automatically provides a set of mappers for this provider. This set is changeable by users, who can also develop mappers or update/delete existing ones. User Attribute Mapper This mapper specifies which LDAP attribute maps to the attribute of the Red Hat build of Keycloak user. For example, you can configure the mail LDAP attribute to the email attribute in the Red Hat build of Keycloak database. For this mapper implementation, a one-to-one mapping always exists. FullName Mapper This mapper specifies the full name of the user. Red Hat build of Keycloak saves the name in an LDAP attribute (usually cn ) and maps the name to the firstName and lastname attributes in the Red Hat build of Keycloak database. Having cn to contain the full name of the user is common for LDAP deployments. Note When you register new users in Red Hat build of Keycloak and Sync Registrations is ON for the LDAP provider, the fullName mapper permits falling back to the username. This fallback is useful when using Microsoft Active Directory (MSAD). The common setup for MSAD is to configure the cn LDAP attribute as fullName and, at the same time, use the cn LDAP attribute as the RDN LDAP Attribute in the LDAP provider configuration. With this setup, Red Hat build of Keycloak falls back to the username. For example, if you create Red Hat build of Keycloak user "john123" and leave firstName and lastName empty, then the fullname mapper saves "john123" as the value of the cn in LDAP. When you enter "John Doe" for firstName and lastName later, the fullname mapper updates LDAP cn to the "John Doe" value as falling back to the username is unnecessary. Hardcoded Attribute Mapper This mapper adds a hardcoded attribute value to each Red Hat build of Keycloak user linked with LDAP. This mapper can also force values for the enabled or emailVerified user properties. Role Mapper This mapper configures role mappings from LDAP into Red Hat build of Keycloak role mappings. A single role mapper can map LDAP roles (usually groups from a particular branch of the LDAP tree) into roles corresponding to a specified client's realm roles or client roles. You can configure more Role mappers for the same LDAP provider. For example, you can specify that role mappings from groups under ou=main,dc=example,dc=org map to realm role mappings, and role mappings from groups under ou=finance,dc=example,dc=org map to client role mappings of client finance . Hardcoded Role Mapper This mapper grants a specified Red Hat build of Keycloak role to each Red Hat build of Keycloak user from the LDAP provider. Group Mapper This mapper maps LDAP groups from a branch of an LDAP tree into groups within Red Hat build of Keycloak. This mapper also propagates user-group mappings from LDAP into user-group mappings in Red Hat build of Keycloak. MSAD User Account Mapper This mapper is specific to Microsoft Active Directory (MSAD). It can integrate the MSAD user account state into the Red Hat build of Keycloak account state, such as enabled account or expired password. This mapper uses the userAccountControl , and pwdLastSet LDAP attributes, specific to MSAD and are not the LDAP standard. For example, if the value of pwdLastSet is 0 , the Red Hat build of Keycloak user must update their password. 
The result is an UPDATE_PASSWORD required action added to the user. If the value of userAccountControl is 514 (disabled account), the Red Hat build of Keycloak user is disabled. Certificate Mapper This mapper maps X.509 certificates. Red Hat build of Keycloak uses it in conjunction with X.509 authentication and Full certificate in PEM format as an identity source. This mapper behaves similarly to the User Attribute Mapper , but Red Hat build of Keycloak can filter for an LDAP attribute storing a PEM or DER format certificate. Enable Always Read Value From LDAP with this mapper. User Attribute mappers that map basic Red Hat build of Keycloak user attributes, such as username, firstname, lastname, and email, to corresponding LDAP attributes. You can extend these and provide your own additional attribute mappings. The Admin Console provides tooltips to help with configuring the corresponding mappers. 4.3.8. Password hashing When Red Hat build of Keycloak updates a password, Red Hat build of Keycloak sends the password in plain-text format. This action is different from updating the password in the built-in Red Hat build of Keycloak database, where Red Hat build of Keycloak hashes and salts the password before sending it to the database. For LDAP, Red Hat build of Keycloak relies on the LDAP server to hash and salt the password. By default, LDAP servers such as MSAD, RHDS, or FreeIPA hash and salt passwords. Other LDAP servers such as OpenLDAP or ApacheDS store the passwords in plain-text unless you use the LDAPv3 Password Modify Extended Operation as described in RFC3062 . Enable the LDAPv3 Password Modify Extended Operation in the LDAP configuration page. See the documentation of your LDAP server for more details. Warning Always verify that user passwords are properly hashed and not stored as plaintext by inspecting a changed directory entry using ldapsearch and base64 decode the userPassword attribute value. 4.3.9. Configuring the connection pool For more efficiency when managing LDAP connections and to improve performance when handling multiple connections, you can enable connection pooling. By doing that, when a connection is closed, it will be returned to the pool for future use therefore reducing the cost of creating new connections all the time. The LDAP connection pool configuration is configured using the following system properties: Name Description com.sun.jndi.ldap.connect.pool.authentication A list of space-separated authentication types of connections that may be pooled. Valid types are "none", "simple", and "DIGEST-MD5" com.sun.jndi.ldap.connect.pool.initsize The string representation of an integer that represents the number of connections per connection identity to create when initially creating a connection for the identity com.sun.jndi.ldap.connect.pool.maxsize The string representation of an integer that represents the maximum number of connections per connection identity that can be maintained concurrently com.sun.jndi.ldap.connect.pool.prefsize The string representation of an integer that represents the preferred number of connections per connection identity that should be maintained concurrently com.sun.jndi.ldap.connect.pool.timeout The string representation of an integer that represents the number of milliseconds that an idle connection may remain in the pool without being closed and removed from the pool com.sun.jndi.ldap.connect.pool.protocol A list of space-separated protocol types of connections that may be pooled. 
Valid types are "plain" and "ssl" com.sun.jndi.ldap.connect.pool.debug A string that indicates the level of debug output to produce. Valid values are "fine" (trace connection creation and removal) and "all" (all debugging information) For more details, see the Java LDAP Connection Pooling Configuration documentation. To set any of these properties, you can set the JAVA_OPTS_APPEND environment variable: export JAVA_OPTS_APPEND=-Dcom.sun.jndi.ldap.connect.pool.initsize=10 -Dcom.sun.jndi.ldap.connect.pool.maxsize=50 4.3.10. Troubleshooting It is useful to increase the logging level to TRACE for the category org.keycloak.storage.ldap . With this setting, many logging messages are sent to the server log at the TRACE level, including the logging for all queries to the LDAP server and the parameters that were used to send the queries. When you create an LDAP question on the user forum or in JIRA, consider attaching the server log with TRACE logging enabled. If the log is too big, a good alternative is to include just the snippet from the server log with the messages that were added during the operation that causes the issue. When you create an LDAP provider, a message appears in the server log at the INFO level starting with: It shows the configuration of your LDAP provider. Before you ask questions or report bugs, it is helpful to include this message to show your LDAP configuration. Feel free to replace any configuration values that you do not want to include with placeholder values. One example is bindDn=some-placeholder . For connectionUrl , feel free to replace it as well, but it is generally useful to include at least the protocol that was used ( ldap vs ldaps ). Similarly, it can be useful to include the details of the configuration of your LDAP mappers, which are displayed with a message like this at the DEBUG level: Note that those messages are displayed only with DEBUG logging enabled. For tracking performance or connection pooling issues, consider setting the value of the property com.sun.jndi.ldap.connect.pool.debug to all . This change adds many additional messages to the server log, including logging for LDAP connection pooling. As a result, you can track issues related to connection pooling or performance. For more details, see Configuring the connection pool. Note After changing the configuration of connection pooling, you may need to restart the Red Hat build of Keycloak server to enforce re-initialization of the LDAP provider connection. If no more messages appear for connection pooling even after a server restart, it can indicate that connection pooling does not work with your LDAP server. When reporting an LDAP issue, consider attaching the part of your LDAP tree with the target data that causes issues in your environment. For example, if the login of some user takes a lot of time, consider attaching that user's LDAP entry showing the count of member attributes of the various "group" entries. In this case, it is also useful to note whether those group entries are mapped to a Group LDAP mapper (or Role LDAP Mapper) in Red Hat build of Keycloak, and so on. 4.4. SSSD and FreeIPA Identity Management integration Red Hat build of Keycloak includes the System Security Services Daemon (SSSD) plugin. SSSD is part of Fedora and Red Hat Enterprise Linux (RHEL), and it provides access to multiple identities and authentication providers. SSSD also provides benefits such as failover and offline support.
For more information, see the Red Hat Enterprise Linux Identity Management documentation . SSSD integrates with the FreeIPA identity management (IdM) server, providing authentication and access control. With this integration, Red Hat build of Keycloak can authenticate against privileged access management (PAM) services and retrieve user data from SSSD. For more information about using Red Hat Identity Management in Linux environments, see the Red Hat Enterprise Linux Identity Management documentation . Red Hat build of Keycloak and SSSD communicate through read-only D-Bus interfaces. For this reason, the way to provision and update users is to use the FreeIPA/IdM administration interface. By default, the interface imports the username, email, first name, and last name. Note Red Hat build of Keycloak registers groups and roles automatically but does not synchronize them. Any changes made by the Red Hat build of Keycloak administrator in Red Hat build of Keycloak do not synchronize with SSSD. 4.4.1. FreeIPA/IdM server The FreeIPA Container image is available at Quay.io . To set up the FreeIPA server, see the FreeIPA documentation . Procedure Run your FreeIPA server using this command: docker run --name freeipa-server-container -it \ -h server.freeipa.local -e PASSWORD=YOUR_PASSWORD \ -v /sys/fs/cgroup:/sys/fs/cgroup:ro \ -v /var/lib/ipa-data:/data:Z freeipa/freeipa-server The parameter -h with server.freeipa.local represents the FreeIPA/IdM server hostname. Change YOUR_PASSWORD to a password of your own. After the container starts, change the /etc/hosts file to include: x.x.x.x server.freeipa.local If you do not make this change, you must set up a DNS server. Use the following command to enroll your Linux server in the IPA domain so that the SSSD federation provider starts and runs on Red Hat build of Keycloak: ipa-client-install --mkhomedir -p admin -w password Run the following command on the client to verify the installation is working: kinit admin Enter your password. Add users to the IPA server using this command: USD ipa user-add <username> --first=<first name> --last=<surname> --email=<email address> --phone=<telephoneNumber> --street=<street> --city=<city> --state=<state> --postalcode=<postal code> --password Force set the user's password using kinit. kinit <username> Enter the following to restore normal IPA operation: kdestroy -A kinit admin 4.4.2. SSSD and D-Bus The federation provider obtains the data from SSSD using D-BUS. It authenticates the data using PAM. Procedure Install the sssd-dbus RPM. USD sudo yum install sssd-dbus Run the following provisioning script: USD bin/federation-sssd-setup.sh The script can also be used as a guide to configure SSSD and PAM for Red Hat build of Keycloak. It makes the following changes to /etc/sssd/sssd.conf : [domain/your-hostname.local] ... ldap_user_extra_attrs = mail:mail, sn:sn, givenname:givenname, telephoneNumber:telephoneNumber ... [sssd] services = nss, sudo, pam, ssh, ifp ... [ifp] allowed_uids = root, yourOSUsername user_attributes = +mail, +telephoneNumber, +givenname, +sn The ifp service is added to SSSD and configured to allow the OS user to interrogate the IPA server through this interface. The script also creates a new PAM service /etc/pam.d/keycloak to authenticate users via SSSD: auth required pam_sss.so account required pam_sss.so Run dbus-send to ensure the setup is successful. 
dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:<username> array:string:mail,givenname,sn,telephoneNumber dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:<username> If the setup is successful, each command displays the user's attributes and groups respectively. If there is a timeout or an error, the federation provider running on Red Hat build of Keycloak cannot retrieve any data. This error usually happens because the server is not enrolled in the FreeIPA IdM server, or does not have permission to access the SSSD service. If you do not have permission to access the SSSD service, ensure that the user running the Red Hat build of Keycloak server is in the /etc/sssd/sssd.conf file in the following section: [ifp] allowed_uids = root, yourOSUsername And the ipaapi system user is created inside the host. This user is necessary for the ifp service. Check the user is created in the system. grep ipaapi /etc/passwd ipaapi:x:992:988:IPA Framework User:/:/sbin/nologin 4.4.3. Enabling the SSSD federation provider Red Hat build of Keycloak uses DBus-Java project to communicate at a low level with D-Bus and JNA to authenticate via Operating System Pluggable Authentication Modules (PAM). Although now Red Hat build of Keycloak contains all the needed libraries to run the SSSD provider, JDK version 17 is needed. Therefore the SSSD provider will only be displayed when the host configuration is correct and JDK 17 is used to run Red Hat build of Keycloak. 4.4.4. Configuring a federated SSSD store After the installation, configure a federated SSSD store. Procedure Click User Federation in the menu. If everything is setup successfully the Add Sssd providers button will be displayed in the page. Click on it. Assign a name to the new provider. Click Save . You can now authenticate against Red Hat build of Keycloak using a FreeIPA/IdM user and credentials. 4.5. Custom providers Red Hat build of Keycloak does have a Service Provider Interface (SPI) for User Storage Federation to develop custom providers. You can find documentation on developing customer providers in the Server Developer Guide . | [
"export JAVA_OPTS_APPEND=-Dcom.sun.jndi.ldap.connect.pool.initsize=10 -Dcom.sun.jndi.ldap.connect.pool.maxsize=50",
"Creating new LDAP Store for the LDAP storage provider:",
"Mapper for provider: XXX, Mapper name: YYY, Provider: ZZZ",
"docker run --name freeipa-server-container -it -h server.freeipa.local -e PASSWORD=YOUR_PASSWORD -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/lib/ipa-data:/data:Z freeipa/freeipa-server",
"x.x.x.x server.freeipa.local",
"ipa-client-install --mkhomedir -p admin -w password",
"kinit admin",
"ipa user-add <username> --first=<first name> --last=<surname> --email=<email address> --phone=<telephoneNumber> --street=<street> --city=<city> --state=<state> --postalcode=<postal code> --password",
"kinit <username>",
"kdestroy -A kinit admin",
"sudo yum install sssd-dbus",
"bin/federation-sssd-setup.sh",
"[domain/your-hostname.local] ldap_user_extra_attrs = mail:mail, sn:sn, givenname:givenname, telephoneNumber:telephoneNumber [sssd] services = nss, sudo, pam, ssh, ifp [ifp] allowed_uids = root, yourOSUsername user_attributes = +mail, +telephoneNumber, +givenname, +sn",
"auth required pam_sss.so account required pam_sss.so",
"dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:<username> array:string:mail,givenname,sn,telephoneNumber dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:<username>",
"[ifp] allowed_uids = root, yourOSUsername",
"grep ipaapi /etc/passwd ipaapi:x:992:988:IPA Framework User:/:/sbin/nologin"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/user-storage-federation |
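As a practical follow-up to the password hashing warning above, the following sketch shows how a changed directory entry can be inspected with ldapsearch; the host, bind DN, base DN, and user are placeholders for your environment:

ldapsearch -x -H ldaps://ldap.example.com -D "cn=admin,dc=example,dc=org" -W -b "ou=people,dc=example,dc=org" "(uid=john123)" userPassword
echo '<base64-value-from-userPassword>' | base64 -d

If the decoded value starts with a scheme prefix such as {SSHA} rather than the plain-text password, the password is stored hashed.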
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_eclipse_4.18/making-open-source-more-inclusive |
Chapter 1. Architecture of OpenShift AI Self-Managed | Chapter 1. Architecture of OpenShift AI Self-Managed Red Hat OpenShift AI Self-Managed is an Operator that is available in a self-managed environment, such as Red Hat OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA Classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift. OpenShift AI integrates the following components and services: At the service layer: OpenShift AI dashboard A customer-facing dashboard that shows available and installed applications for the OpenShift AI environment as well as learning resources such as tutorials, quick starts, and documentation. Administrative users can access functionality to manage users, clusters, notebook images, accelerator profiles, and model-serving runtimes. Data scientists can use the dashboard to create projects to organize their data science work. Model serving Data scientists can deploy trained machine-learning models to serve intelligent applications in production. After deployment, applications can send requests to the model using its deployed API endpoint. Data science pipelines Data scientists can build portable machine learning (ML) workflows with data science pipelines 2.0, using Docker containers. With data science pipelines, data scientists can automate workflows as they develop their data science models. Jupyter (self-managed) A self-managed application that allows data scientists to configure their own notebook server environment and develop machine learning models in JupyterLab. Distributed workloads Data scientists can use multiple nodes in parallel to train machine-learning models or process data more quickly. This approach significantly reduces the task completion time, and enables the use of larger datasets and more complex models. At the management layer: The Red Hat OpenShift AI Operator A meta-operator that deploys and maintains all components and sub-operators that are part of OpenShift AI. Monitoring services Prometheus gathers metrics from OpenShift AI for monitoring purposes. When you install the Red Hat OpenShift AI Operator in the OpenShift cluster, the following new projects are created: The redhat-ods-operator project contains the Red Hat OpenShift AI Operator. The redhat-ods-applications project installs the dashboard and other required components of OpenShift AI. The redhat-ods-monitoring project contains services for monitoring. The rhods-notebooks project is where notebook environments are deployed by default. You or your data scientists must create additional projects for the applications that will use your machine learning models. Do not install independent software vendor (ISV) applications in namespaces associated with OpenShift AI. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed_in_a_disconnected_environment/architecture-of-openshift-ai-self-managed_install |
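To confirm that these projects were created after installing the Operator, you can list them with the OpenShift CLI; this is a sketch and assumes you are logged in to the cluster with sufficient privileges:

oc get projects | grep -E 'redhat-ods|rhods-notebooks'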
Chapter 106. KafkaUserAuthorizationSimple schema reference | Chapter 106. KafkaUserAuthorizationSimple schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserAuthorizationSimple type from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple . Property Description type Must be simple . string acls List of ACL rules which should be applied to this user. AclRule array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaUserAuthorizationSimple-reference |
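A minimal sketch of how this authorization type might appear in a KafkaUser resource follows. Only type and acls are defined by this schema; the apiVersion and the fields inside the ACL rule (resource, operations) come from the wider KafkaUser and AclRule schemas and are shown here as assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Read
          - Describe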
Chapter 24. OpenShiftControllerManager [operator.openshift.io/v1] | Chapter 24. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 24.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 24.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 24.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 24.1.4. 
.status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 24.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 24.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 24.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftcontrollermanagers DELETE : delete collection of OpenShiftControllerManager GET : list objects of kind OpenShiftControllerManager POST : create an OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} DELETE : delete an OpenShiftControllerManager GET : read the specified OpenShiftControllerManager PATCH : partially update the specified OpenShiftControllerManager PUT : replace the specified OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status GET : read status of the specified OpenShiftControllerManager PATCH : partially update status of the specified OpenShiftControllerManager PUT : replace status of the specified OpenShiftControllerManager 24.2.1. /apis/operator.openshift.io/v1/openshiftcontrollermanagers HTTP method DELETE Description delete collection of OpenShiftControllerManager Table 24.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftControllerManager Table 24.2. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftControllerManager Table 24.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.4. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.5. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 202 - Accepted OpenShiftControllerManager schema 401 - Unauthorized Empty 24.2.2. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} Table 24.6. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager HTTP method DELETE Description delete an OpenShiftControllerManager Table 24.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 24.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftControllerManager Table 24.9. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftControllerManager Table 24.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.11. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftControllerManager Table 24.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.13. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.14. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty 24.2.3. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status Table 24.15. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager HTTP method GET Description read status of the specified OpenShiftControllerManager Table 24.16. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftControllerManager Table 24.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.18. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftControllerManager Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/openshiftcontrollermanager-operator-openshift-io-v1 |
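As a usage sketch, the same resource can be read and patched through the OpenShift CLI instead of calling the endpoints directly; this assumes the cluster-scoped instance is named cluster, which is the usual convention for this operator resource:

oc get openshiftcontrollermanager cluster -o yaml
oc patch openshiftcontrollermanager cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'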
6.2. Managing Software Repositories | 6.2. Managing Software Repositories When a system is subscribed to the Red Hat Content Delivery Network, a repository file is created in the /etc/yum.repos.d/ directory. To verify that, use yum to list all enabled repositories: yum repolist Red Hat Subscription Management also allows you to manually enable or disable software repositories provided by Red Hat. To list all available repositories, use the following command: subscription-manager repos --list The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Where variant is the Red Hat Enterprise Linux system variant ( server or workstation ), and version is the Red Hat Enterprise Linux system version ( 6 or 7 ), for example: To enable a repository, enter a command as follows: subscription-manager repos --enable repository Replace repository with a name of the repository to enable. Similarly, to disable a repository, use the following command: subscription-manager repos --disable repository Section 8.4, "Configuring Yum and Yum Repositories" provides detailed information about managing software repositories using yum . | [
"rhel- variant -rhscl- version -rpms rhel- variant -rhscl- version -debug-rpms rhel- variant -rhscl- version -source-rpms",
"rhel-server-rhscl- 6 -eus-rpms rhel-server-rhscl- 6 -eus-source-rpms rhel-server-rhscl- 6 -eus-debug-rpms"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-subscription_and_support-registering_a_system_and_managing_subscriptions-managing_repositories |
14.15.2.9. snapshot-revert domain | 14.15.2.9. snapshot-revert domain Reverts the given domain to the snapshot specified by snapshot , or to the current snapshot with --current . Warning Be aware that this is a destructive action; any changes in the domain since the last snapshot was taken will be lost. Also note that the state of the domain after snapshot-revert is complete will be the state of the domain at the time the original snapshot was taken. To revert the snapshot, run Normally, reverting to a snapshot leaves the domain in the state it was at the time the snapshot was created, except that a disk snapshot with no guest virtual machine state leaves the domain in an inactive state. Passing either the --running or --paused option will perform additional state changes (such as booting an inactive domain, or pausing a running domain). Since transient domains cannot be inactive, it is required to use one of these options when reverting to a disk snapshot of a transient domain. There are two cases where a snapshot revert involves extra risk, which requires the use of --force to proceed. One is the case of a snapshot that lacks full domain information for reverting configuration; since libvirt cannot prove that the current configuration matches what was in use at the time of the snapshot, supplying --force assures libvirt that the snapshot is compatible with the current configuration (and if it is not, the domain will likely fail to run). The other is the case of reverting from a running domain to an active state where a new hypervisor has to be created rather than reusing the existing hypervisor, because it implies drawbacks such as breaking any existing VNC or Spice connections; this condition happens with an active snapshot that uses a provably incompatible configuration, as well as with an inactive snapshot that is combined with the --start or --pause option. | [
"snapshot-revert domain {snapshot | --current} [{--running | --paused}] [--force]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-managing_snapshots-snapshot_revert_domain |
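The following sketch places snapshot-revert in a typical virsh workflow; the domain name rhel6vm and the snapshot name pre-update are hypothetical.
# Take a snapshot before making changes to the guest.
virsh snapshot-create-as rhel6vm pre-update
# Later, discard everything done since the snapshot and leave the guest running.
virsh snapshot-revert rhel6vm pre-update --running
# Reverting across a provably incompatible configuration additionally requires --force.
virsh snapshot-revert rhel6vm pre-update --running --force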
Chapter 11. Adding execution nodes to Red Hat Ansible Automation Platform Operator | Chapter 11. Adding execution nodes to Red Hat Ansible Automation Platform Operator You can enable the Ansible Automation Platform Operator with execution nodes by downloading and installing the install bundle. Prerequisites An automation controller instance. The receptor collection package is installed. The Ansible Automation Platform repository ansible-automation-platform-2.5-for-rhel-{RHEL-RELEASE-NUMBER}-x86_64-rpms is enabled. Procedure Log in to Red Hat Ansible Automation Platform. In the navigation panel, select Automation Execution Infrastructure Instances . Click Add . Input the Execution Node domain name or IP in the Host Name field. Optional: Input the port number in the Listener Port field. Click Save . Click the download icon to Install Bundle . This starts a download; take note of where you save the file. Untar the gz file. Note To run the install_receptor.yml playbook you must install the receptor collection from Ansible Galaxy: ansible-galaxy collection install -r requirements.yml Update the playbook with your user name and SSH private key file. Note that ansible_host pre-populates with the hostname you input earlier. all: hosts: remote-execution: ansible_host: example_host_name # Same as configured in AAP WebUI ansible_user: <username> #user provided ansible_ssh_private_key_file: ~/.ssh/id_example Open your terminal, and navigate to the directory where you saved the playbook. To install the bundle run: ansible-playbook install_receptor.yml -i inventory.yml Once installed, you can upgrade your execution node by downloading and re-running the playbook for the instance you created. Verification To verify the receptor service status, run the following command: sudo systemctl status receptor.service Make sure the service is in the active (running) state. To verify that your playbook ran correctly on your new node, run the following command: watch podman ps Additional resources For more information about managing instance groups, see the Managing Instance Groups section of the Automation Controller User Guide. | [
"all: hosts: remote-execution: ansible_host: example_host_name # Same with configured in AAP WebUI ansible_user: <username> #user provided Ansible_ssh_private_key_file: ~/.ssh/id_example",
"ansible-playbook install_receptor.yml -i inventory.yml",
"sudo systemctl status receptor.service",
"watch podman ps"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/operator-add-execution-nodes_operator-upgrade |
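In practice the downloaded bundle is unpacked and run in one short session, as in the following sketch; the bundle file name, the collection it pulls in, and the SSH user are assumptions, so adjust them to match the requirements.yml shipped in your bundle.
# Unpack the downloaded install bundle and enter it (file name is hypothetical).
tar xzf example_host_name_install_bundle.tar.gz && cd example_host_name_install_bundle
# Install the receptor collection referenced by the bundle.
ansible-galaxy collection install -r requirements.yml
# Run the installer against the provided inventory.
ansible-playbook -i inventory.yml install_receptor.yml
# Confirm the receptor service on the new node.
ssh <username>@example_host_name sudo systemctl status receptor.service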
Chapter 33. Using systemd to manage resources used by applications | Chapter 33. Using systemd to manage resources used by applications RHEL 9 moves the resource management settings from the process level to the application level by binding the system of cgroup hierarchies with the systemd unit tree. Therefore, you can manage the system resources with the systemctl command, or by modifying the systemd unit files. To achieve this, systemd takes various configuration options from the unit files or directly via the systemctl command. Then systemd applies those options to specific process groups by using the Linux kernel system calls and features like cgroups and namespaces . Note You can review the full set of configuration options for systemd in the following manual pages: systemd.resource-control(5) systemd.exec(5) 33.1. Role of systemd in resource management The core function of systemd is service management and supervision. The systemd system and service manager : ensures that managed services start at the right time and in the correct order during the boot process. ensures that managed services run smoothly to use the underlying hardware platform optimally. provides capabilities to define resource management policies. provides capabilities to tune various options, which can improve the performance of the service. Important In general, Red Hat recommends you use systemd for controlling the usage of system resources. You should manually configure the cgroups virtual file system only in special cases. For example, when you need to use cgroup-v1 controllers that have no equivalents in cgroup-v2 hierarchy. 33.2. Distribution models of system sources To modify the distribution of system resources, you can apply one or more of the following distribution models: Weights You can distribute the resource by adding up the weights of all sub-groups and giving each sub-group the fraction matching its ratio against the sum. For example, if you have 10 cgroups, each with weight of value 100, the sum is 1000. Each cgroup receives one tenth of the resource. Weight is usually used to distribute stateless resources. For example the CPUWeight= option is an implementation of this resource distribution model. Limits A cgroup can consume up to the configured amount of the resource. The sum of sub-group limits can exceed the limit of the parent cgroup. Therefore it is possible to overcommit resources in this model. For example the MemoryMax= option is an implementation of this resource distribution model. Protections You can set up a protected amount of a resource for a cgroup. If the resource usage is below the protection boundary, the kernel will try not to penalize this cgroup in favor of other cgroups that compete for the same resource. An overcommit is also possible. For example the MemoryLow= option is an implementation of this resource distribution model. Allocations Exclusive allocations of an absolute amount of a finite resource. An overcommit is not possible. An example of this resource type in Linux is the real-time budget. unit file option A setting for resource control configuration. For example, you can configure CPU resource with options like CPUAccounting= , or CPUQuota= . Similarly, you can configure memory or I/O resources with options like AllowedMemoryNodes= and IOAccounting= . 33.3. Allocating system resources using systemd Allocating system resources by using systemd involves creating & managing systemd services and units. 
This can be configured to start, stop, or restart at specific times or in response to certain system events. Procedure To change the required value of the unit file option of your service, you can adjust the value in the unit file, or use the systemctl command: Check the assigned values for the service of your choice. Set the required value of the CPU time allocation policy option: Verification Check the newly assigned values for the service of your choice. Additional resources systemd.resource-control(5) and systemd.exec(5) man pages on your system 33.4. Overview of systemd hierarchy for cgroups On the backend, the systemd system and service manager uses the slice , the scope , and the service units to organize and structure processes in the control groups. You can further modify this hierarchy by creating custom unit files or using the systemctl command. Also, systemd automatically mounts hierarchies for important kernel resource controllers at the /sys/fs/cgroup/ directory. For resource control, you can use the following three systemd unit types: Service A process or a group of processes, which systemd started according to a unit configuration file. Services encapsulate the specified processes so that they can be started and stopped as one set. Services are named in the following way: Scope A group of externally created processes. Scopes encapsulate processes that are started and stopped by the arbitrary processes through the fork() function and then registered by systemd at runtime. For example, user sessions, containers, and virtual machines are treated as scopes. Scopes are named as follows: Slice A group of hierarchically organized units. Slices organize a hierarchy in which scopes and services are placed. The actual processes are contained in scopes or in services. Every name of a slice unit corresponds to the path to a location in the hierarchy. The dash ( - ) character acts as a separator of the path components to a slice from the -.slice root slice. In the following example: parent-name.slice is a sub-slice of parent.slice , which is a sub-slice of the -.slice root slice. parent-name.slice can have its own sub-slice named parent-name-name2.slice , and so on. The service , the scope , and the slice units directly map to objects in the control group hierarchy. When these units are activated, they map directly to control group paths built from the unit names. The following is an abbreviated example of a control group hierarchy: The example above shows that services and scopes contain processes and are placed in slices that do not contain processes of their own. Additional resources Managing system services with systemctl in Red Hat Enterprise Linux What are kernel resource controllers The systemd.resource-control(5) , systemd.exec(5) , cgroups(7) , fork() , fork(2) manual pages Understanding cgroups 33.5. Listing systemd units Use the systemd system and service manager to list its units. Procedure List all active units on the system with the systemctl utility. The terminal returns an output similar to the following example: UNIT A name of a unit that also reflects the unit position in a control group hierarchy. The units relevant for resource control are a slice , a scope , and a service . LOAD Indicates whether the unit configuration file was properly loaded. If the unit file failed to load, the field provides the state error instead of loaded . Other unit load states are: stub , merged , and masked . ACTIVE The high-level unit activation state, which is a generalization of SUB . 
SUB The low-level unit activation state. The range of possible values depends on the unit type. DESCRIPTION The description of the unit content and functionality. List all active and inactive units: Limit the amount of information in the output: The --type option requires a comma-separated list of unit types such as a service and a slice , or unit load states such as loaded and masked . Additional resources Managing system services with systemctl in RHEL The systemd.resource-control(5) , systemd.exec(5) manual pages 33.6. Viewing systemd cgroups hierarchy Display control groups ( cgroups ) hierarchy and processes running in specific cgroups . Procedure Display the whole cgroups hierarchy on your system with the systemd-cgls command. The example output returns the entire cgroups hierarchy, where the highest level is formed by slices . Display the cgroups hierarchy filtered by a resource controller with the systemd-cgls < resource_controller > command. The example output lists the services that interact with the selected controller. Display detailed information about a certain unit and its part of the cgroups hierarchy with the systemctl status < system_unit > command. Additional resources systemd.resource-control(5) and cgroups(7) man pages on your system 33.7. Viewing cgroups of processes You can learn which control group ( cgroup ) a process belongs to. Then you can check the cgroup to find which controllers and controller-specific configurations it uses. Procedure To view which cgroup a process belongs to, run the # cat /proc/< PID >/cgroup command: The example output relates to a process of interest. In this case, it is a process identified by PID 2467 , which belongs to the example.service unit. You can determine whether the process was placed in a correct control group as defined by the systemd unit file specifications. To display what controllers and respective configuration files the cgroup uses, check the cgroup directory: Note The version 1 hierarchy of cgroups uses a per-controller model. Therefore the output from the /proc/ PID /cgroup file shows which cgroups under each controller the PID belongs to. You can find the respective cgroups under the controller directories at /sys/fs/cgroup/ <controller_name> / . Additional resources The cgroups(7) manual page What are kernel resource controllers Documentation in the /usr/share/doc/kernel-doc-<kernel_version>/Documentation/admin-guide/cgroup-v2.rst file (after installing the kernel-doc package) 33.8. Monitoring resource consumption View a list of currently running control groups ( cgroups ) and their resource consumption in real time. Procedure Display a dynamic account of currently running cgroups with the systemd-cgtop command. The example output displays currently running cgroups ordered by their resource usage (CPU, memory, disk I/O load). The list refreshes every 1 second by default. Therefore, it offers a dynamic insight into the actual resource usage of each control group. Additional resources The systemd-cgtop(1) manual page 33.9. Using systemd unit files to set limits for applications The systemd service manager supervises each existing or running unit and creates control groups for them. The units have configuration files in the /usr/lib/systemd/system/ directory. You can manually modify the unit files to: set limits. prioritize. control access to hardware resources for groups of processes. Prerequisites You have the root privileges.
Procedure Edit the /usr/lib/systemd/system/example.service file to limit the memory usage of a service: The configuration limits the maximum memory that the processes in a control group cannot exceed. The example.service service is part of such a control group which has imposed limitations. You can use suffixes K, M, G, or T to identify Kilobyte, Megabyte, Gigabyte, or Terabyte as a unit of measurement. Reload all unit configuration files: Restart the service: Verification Check that the changes took effect: The example output shows that the memory consumption was limited at around 1,500 KB. Additional resources Understanding cgroups Managing system services with systemctl in Red Hat Enterprise Linux systemd.resource-control(5) , systemd.exec(5) , and cgroups(7) man pages on your system 33.10. Using systemctl command to set limits to applications CPU affinity settings help you restrict the access of a particular process to some CPUs. Effectively, the CPU scheduler never schedules the process to run on the CPU that is not in the affinity mask of the process. The default CPU affinity mask applies to all services managed by systemd . To configure CPU affinity mask for a particular systemd service, systemd provides CPUAffinity= both as: a unit file option. a configuration option in the [Manager] section of the /etc/systemd/system.conf file. The CPUAffinity= unit file option sets a list of CPUs or CPU ranges that are merged and used as the affinity mask. Procedure To set CPU affinity mask for a particular systemd service using the CPUAffinity unit file option: Check the values of the CPUAffinity unit file option in the service of your choice: As the root user, set the required value of the CPUAffinity unit file option for the CPU ranges used as the affinity mask: Restart the service to apply the changes. Additional resources systemd.resource-control(5) , systemd.exec(5) , cgroups(7) man pages on your system 33.11. Setting global default CPU affinity through manager configuration The CPUAffinity option in the /etc/systemd/system.conf file defines an affinity mask for the process identification number (PID) 1 and all processes forked off of PID1. You can then override the CPUAffinity on a per-service basis. To set the default CPU affinity mask for all systemd services using the /etc/systemd/system.conf file: Set the CPU numbers for the CPUAffinity= option in the [Manager] section of the /etc/systemd/system.conf file. Save the edited file and reload the systemd service: Reboot the server to apply the changes. Additional resources The systemd.resource-control(5) and systemd.exec(5) man pages. 33.12. Configuring NUMA policies using systemd Non-uniform memory access (NUMA) is a computer memory subsystem design, in which the memory access time depends on the physical memory location relative to the processor. Memory close to the CPU has lower latency (local memory) than memory that is local for a different CPU (foreign memory) or is shared between a set of CPUs. In terms of the Linux kernel, NUMA policy governs where (for example, on which NUMA nodes) the kernel allocates physical memory pages for the process. systemd provides unit file options NUMAPolicy and NUMAMask to control memory allocation policies for services. Procedure To set the NUMA memory policy through the NUMAPolicy unit file option: Check the values of the NUMAPolicy unit file option in the service of your choice: As a root, set the required policy type of the NUMAPolicy unit file option: Restart the service to apply the changes. 
To set a global NUMAPolicy setting using the [Manager] configuration option: Search in the /etc/systemd/system.conf file for the NUMAPolicy option in the [Manager] section of the file. Edit the policy type and save the file. Reload the systemd configuration: Reboot the server. Important When you configure a strict NUMA policy, for example bind , make sure that you also appropriately set the CPUAffinity= unit file option. Additional resources Using systemctl command to set limits to applications The systemd.resource-control(5) , systemd.exec(5) , and set_mempolicy(2) man pages. 33.13. NUMA policy configuration options for systemd Systemd provides the following options to configure the NUMA policy: NUMAPolicy Controls the NUMA memory policy of the executed processes. You can use these policy types: default preferred bind interleave local NUMAMask Controls the NUMA node list that is associated with the selected NUMA policy. Note that you do not have to specify the NUMAMask option for the following policies: default local For the preferred policy, the list specifies only a single NUMA node. Additional resources systemd.resource-control(5) , systemd.exec(5) , and set_mempolicy(2) man pages on your system 33.14. Creating transient cgroups using systemd-run command The transient cgroups set limits on resources consumed by a unit (service or scope) during its runtime. Procedure To create a transient control group, use the systemd-run command in the following format: This command creates and starts a transient service or a scope unit and runs a custom command in such a unit. The --unit=<name> option gives a name to the unit. If --unit is not specified, the name is generated automatically. The --slice=< name >.slice option makes your service or scope unit a member of a specified slice. Replace < name >.slice with the name of an existing slice (as shown in the output of systemctl -t slice ), or create a new slice by passing a unique name. By default, services and scopes are created as members of the system.slice . Replace < command > with the command you want to enter in the service or the scope unit. The following message is displayed to confirm that you created and started the service or the scope successfully: Optional : Keep the unit running after its processes finished to collect runtime information: The command creates and starts a transient service unit and runs a custom command in the unit. The --remain-after-exit option ensures that the service keeps running after its processes have finished. Additional resources The systemd-run(1) manual page 33.15. Removing transient control groups You can use the systemd system and service manager to remove transient control groups ( cgroups ) if you no longer need to limit, prioritize, or control access to hardware resources for groups of processes. Transient cgroups are automatically released when all the processes that a service or a scope unit contains finish. Procedure To stop the service unit with all its processes, enter: To terminate one or more of the unit processes, enter: The command uses the --kill-who option to select process(es) from the control group you want to terminate. To kill multiple processes at the same time, pass a comma-separated list of PIDs. The --signal option determines the type of POSIX signal to be sent to the specified processes. The default signal is SIGTERM . 
Additional resources What are control groups What are kernel resource controllers systemd.resource-control(5) and cgroups(7) man pages on your system Understanding control groups Managing systemd in RHEL | [
"systemctl show --property < unit file option > < service name >",
"systemctl set-property < service name > < unit file option >=< value >",
"systemctl show --property < unit file option > < service name >",
"<name> .service",
"<name> .scope",
"<parent-name> .slice",
"Control group /: -.slice ├─user.slice │ ├─user-42.slice │ │ ├─session-c1.scope │ │ │ ├─ 967 gdm-session-worker [pam/gdm-launch-environment] │ │ │ ├─1035 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart │ │ │ ├─1054 /usr/libexec/Xorg vt1 -displayfd 3 -auth /run/user/42/gdm/Xauthority -background none -noreset -keeptty -verbose 3 │ │ │ ├─1212 /usr/libexec/gnome-session-binary --autostart /usr/share/gdm/greeter/autostart │ │ │ ├─1369 /usr/bin/gnome-shell │ │ │ ├─1732 ibus-daemon --xim --panel disable │ │ │ ├─1752 /usr/libexec/ibus-dconf │ │ │ ├─1762 /usr/libexec/ibus-x11 --kill-daemon │ │ │ ├─1912 /usr/libexec/gsd-xsettings │ │ │ ├─1917 /usr/libexec/gsd-a11y-settings │ │ │ ├─1920 /usr/libexec/gsd-clipboard ... ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 └─system.slice ├─rngd.service │ └─800 /sbin/rngd -f ├─systemd-udevd.service │ └─659 /usr/lib/systemd/systemd-udevd ├─chronyd.service │ └─823 /usr/sbin/chronyd ├─auditd.service │ ├─761 /sbin/auditd │ └─763 /usr/sbin/sedispatch ├─accounts-daemon.service │ └─876 /usr/libexec/accounts-daemon ├─example.service │ ├─ 929 /bin/bash /home/jdoe/example.sh │ └─4902 sleep 1 ...",
"systemctl UNIT LOAD ACTIVE SUB DESCRIPTION ... init.scope loaded active running System and Service Manager session-2.scope loaded active running Session 2 of user jdoe abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrt-vmcore.service loaded active exited Harvest vmcores for ABRT abrt-xorg.service loaded active running ABRT Xorg log watcher ... -.slice loaded active active Root Slice machine.slice loaded active active Virtual Machine and Container Slice system-getty.slice loaded active active system-getty.slice system-lvm2\\x2dpvscan.slice loaded active active system-lvm2\\x2dpvscan.slice system-sshd\\x2dkeygen.slice loaded active active system-sshd\\x2dkeygen.slice system-systemd\\x2dhibernate\\x2dresume.slice loaded active active system-systemd\\x2dhibernate\\x2dresume> system-user\\x2druntime\\x2ddir.slice loaded active active system-user\\x2druntime\\x2ddir.slice system.slice loaded active active System Slice user-1000.slice loaded active active User Slice of UID 1000 user-42.slice loaded active active User Slice of UID 42 user.slice loaded active active User and Session Slice ...",
"systemctl --all",
"systemctl --type service,masked",
"systemd-cgls Control group /: -.slice ├─user.slice │ ├─user-42.slice │ │ ├─session-c1.scope │ │ │ ├─ 965 gdm-session-worker [pam/gdm-launch-environment] │ │ │ ├─1040 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart ... ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 └─system.slice ... ├─example.service │ ├─6882 /bin/bash /home/jdoe/example.sh │ └─6902 sleep 1 ├─systemd-journald.service └─629 /usr/lib/systemd/systemd-journald ...",
"systemd-cgls memory Controller memory; Control group /: ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 ├─user.slice │ ├─user-42.slice │ │ ├─session-c1.scope │ │ │ ├─ 965 gdm-session-worker [pam/gdm-launch-environment] ... └─system.slice | ... ├─chronyd.service │ └─844 /usr/sbin/chronyd ├─example.service │ ├─8914 /bin/bash /home/jdoe/example.sh │ └─8916 sleep 1 ...",
"systemctl status example.service ● example.service - My example service Loaded: loaded (/usr/lib/systemd/system/example.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-04-16 12:12:39 CEST; 3s ago Main PID: 17737 (bash) Tasks: 2 (limit: 11522) Memory: 496.0K (limit: 1.5M) CGroup: /system.slice/example.service ├─17737 /bin/bash /home/jdoe/example.sh └─17743 sleep 1 Apr 16 12:12:39 redhat systemd[1]: Started My example service. Apr 16 12:12:39 redhat bash[17737]: The current time is Tue Apr 16 12:12:39 CEST 2019 Apr 16 12:12:40 redhat bash[17737]: The current time is Tue Apr 16 12:12:40 CEST 2019",
"cat /proc/2467/cgroup 0::/system.slice/example.service",
"cat /sys/fs/cgroup/system.slice/example.service/cgroup.controllers memory pids ls /sys/fs/cgroup/system.slice/example.service/ cgroup.controllers cgroup.events ... cpu.pressure cpu.stat io.pressure memory.current memory.events ... pids.current pids.events pids.max",
"systemd-cgtop Control Group Tasks %CPU Memory Input/s Output/s / 607 29.8 1.5G - - /system.slice 125 - 428.7M - - /system.slice/ModemManager.service 3 - 8.6M - - /system.slice/NetworkManager.service 3 - 12.8M - - /system.slice/accounts-daemon.service 3 - 1.8M - - /system.slice/boot.mount - - 48.0K - - /system.slice/chronyd.service 1 - 2.0M - - /system.slice/cockpit.socket - - 1.3M - - /system.slice/colord.service 3 - 3.5M - - /system.slice/crond.service 1 - 1.8M - - /system.slice/cups.service 1 - 3.1M - - /system.slice/dev-hugepages.mount - - 244.0K - - /system.slice/dev-mapper-rhel\\x2dswap.swap - - 912.0K - - /system.slice/dev-mqueue.mount - - 48.0K - - /system.slice/example.service 2 - 2.0M - - /system.slice/firewalld.service 2 - 28.8M - -",
"... [Service] MemoryMax=1500K ...",
"systemctl daemon-reload",
"systemctl restart example.service",
"cat /sys/fs/cgroup/system.slice/example.service/memory.max 1536000",
"systemctl show --property <CPU affinity configuration option> <service name>",
"systemctl set-property <service name> CPUAffinity= <value>",
"systemctl restart <service name>",
"systemctl daemon-reload",
"systemctl show --property <NUMA policy configuration option> <service name>",
"systemctl set-property <service name> NUMAPolicy= <value>",
"systemctl restart <service name>",
"systemd daemon-reload",
"systemd-run --unit= <name> --slice= <name> .slice <command>",
"Running as unit <name> .service",
"systemd-run --unit= <name> --slice= <name> .slice --remain-after-exit <command>",
"systemctl stop < name >.service",
"systemctl kill < name >.service --kill-who= PID,... --signal=< signal >"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/assembly_using-systemd-to-manage-resources-used-by-applications_monitoring-and-managing-system-status-and-performance |
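The individual pieces above can be combined into one short workflow; the following sketch is illustrative only, and the unit name example.service, the slice name example.slice, and the limit values are assumptions rather than recommendations.
# Apply persistent limits to an existing service (systemd writes them as a drop-in).
systemctl set-property example.service MemoryMax=1G CPUWeight=200
# Confirm the properties now in effect.
systemctl show --property MemoryMax --property CPUWeight example.service
# Run a one-off command in a transient scope with its own limits; the scope is released when the command exits.
systemd-run --scope --slice=example.slice -p CPUWeight=50 -p MemoryMax=256M sleep 300
# Observe the resulting control groups and their resource usage.
systemd-cgls memory
systemd-cgtop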
Chapter 1. Introduction to performance tuning | Chapter 1. Introduction to performance tuning A JBoss EAP installation is optimized by default. However, the configuration of your environment, your applications, and your use of JBoss EAP subsystems can impact performance, meaning additional configuration might be needed. This guide provides optimization recommendations for common JBoss EAP use cases, as well as instructions for monitoring performance and diagnosing performance issues. Important You should stress test and verify all performance configuration changes under anticipated conditions in a staging or testing environment prior to deploying them to production. 1.1. About the use of EAP_HOME in this document In this document, the variable EAP_HOME is used to denote the path to the JBoss EAP installation. Replace this variable with the actual path to your JBoss EAP installation. If you installed the JBoss EAP compressed file, the install directory is the jboss-eap-8.0 directory where you extracted the compressed archive. If you used the installer to install JBoss EAP, the default path for EAP_HOME is ${user.home}/EAP-8.0.0 : For Red Hat Enterprise Linux and Solaris: /home/ USER_NAME /EAP-8.0.0/ For Microsoft Windows: C:\Users\ USER_NAME \EAP-8.0.0\ If you used the JBoss Tools installer to install and configure the JBoss EAP server, the default path for EAP_HOME is ${user.home}/devstudio/runtimes/jboss-eap : For Red Hat Enterprise Linux: /home/ USER_NAME /devstudio/runtimes/jboss-eap/ For Microsoft Windows: C:\Users\ USER_NAME \devstudio\runtimes\jboss-eap or C:\Documents and Settings\ USER_NAME \devstudio\runtimes\jboss-eap\ Note If you set the Target runtime to 8.0 or a later runtime version in JBoss Tools, your project is compatible with the Jakarta EE 8 specification. Note EAP_HOME is not an environment variable. JBOSS_HOME is the environment variable used in scripts. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/performance_tuning_for_red_hat_jboss_enterprise_application_platform/about-performance-tuning_performance-tuning-guide |
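As a small illustration of the convention, the following shell sketch assumes a ZIP installation under /opt; the path itself is an example, not a requirement.
# EAP_HOME is documentation shorthand for the installation directory;
# JBOSS_HOME is the environment variable that the scripts actually read.
export JBOSS_HOME=/opt/jboss-eap-8.0
# Start a standalone server from that installation.
"$JBOSS_HOME"/bin/standalone.sh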
Chapter 6. Selecting a container runtime | Chapter 6. Selecting a container runtime The runc and crun are container runtimes and can be used interchangeably as both implement the OCI runtime specification. The crun container runtime has a couple of advantages over runc, as it is faster and requires less memory. Due to that, the crun container runtime is the recommended container runtime for use. 6.1. The runc container runtime The runc container runtime is a lightweight, portable implementation of the Open Container Initiative (OCI) container runtime specification. The runc runtime shares a lot of low-level code with Docker but it is not dependent on any of the components of the Docker platform. The runc supports Linux namespaces, live migration, and has portable performance profiles. It also provides full support for Linux security features such as SELinux, control groups (cgroups), seccomp, and others. You can build and run images with runc, or you can run OCI-compatible images with runc. 6.2. The crun container runtime The crun is a fast and low-memory footprint OCI container runtime written in C. The crun binary is up to 50 times smaller and up to twice as fast as the runc binary. Using crun, you can also set a minimal number of processes when running your container. The crun runtime also supports OCI hooks. Additional features of crun include: Sharing files by group for rootless containers Controlling the stdout and stderr of OCI hooks Running older versions of systemd on cgroup v2 A C library that is used by other programs Extensibility Portability Additional resources An introduction to crun, a fast and low-memory footprint container runtime 6.3. Running containers with runc and crun With runc or crun, containers are configured using bundles. A bundle for a container is a directory that includes a specification file named config.json and a root filesystem. The root filesystem contains the contents of the container. Note The <runtime> can be crun or runc. Prerequisites The container-tools module is installed. Procedure Pull the registry.access.redhat.com/ubi8/ubi container image: Export the registry.access.redhat.com/ubi8/ubi image to the rhel.tar archive: Create the bundle/rootfs directory: Extract the rhel.tar archive into the bundle/rootfs directory: Create a new specification file named config.json for the bundle: The -b option specifies the bundle directory. The default value is the current directory. Optional: Change the settings: Create an instance of a container named myubi for a bundle: Start a myubi container: Note The name of a container instance must be unique to the host. To start a new instance of a container: # <runtime> start <container_name> Verification List containers started by <runtime> : Additional resources crun and runc man pages on your system An introduction to crun, a fast and low-memory footprint container runtime 6.4. Temporarily changing the container runtime You can use the podman run command with the --runtime option to change the container runtime. Note The <runtime> can be crun or runc. Prerequisites The container-tools module is installed. Procedure Pull the registry.access.redhat.com/ubi8/ubi container image: Change the container runtime using the --runtime option: Optional: List all images: Verification Ensure that the OCI runtime is set to <runtime> in the myubi container: Additional resources An introduction to crun, a fast and low-memory footprint container runtime 6.5. 
Permanently changing the container runtime You can set the container runtime and its options in the /etc/containers/containers.conf configuration file as a root user or in the $HOME/.config/containers/containers.conf configuration file as a non-root user. Note The <runtime> can be crun or runc runtime. Prerequisites The container-tools module is installed. Procedure Change the runtime in the /etc/containers/containers.conf file: Run the container named myubi: Verification Ensure that the OCI runtime is set to <runtime> in the myubi container: Additional resources An introduction to crun, a fast and low-memory footprint container runtime containers.conf man page on your system | [
"podman pull registry.access.redhat.com/ubi8/ubi",
"podman export USD(podman create registry.access.redhat.com/ubi8/ubi) > rhel.tar",
"mkdir -p bundle/rootfs",
"tar -C bundle/rootfs -xf rhel.tar",
"<runtime> spec -b bundle",
"vi bundle/config.json",
"<runtime> create -b bundle/ myubi",
"<runtime> start myubi",
"<runtime> list ID PID STATUS BUNDLE CREATED OWNER myubi 0 stopped /root/bundle 2021-09-14T09:52:26.659714605Z root",
"podman pull registry.access.redhat.com/ubi8/ubi",
"podman run --name=myubi -dt --runtime=<runtime> ubi8 e4654eb4df12ac031f1d0f2657dc4ae6ff8eb0085bf114623b66cc664072e69b",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4654eb4df12 registry.access.redhat.com/ubi8:latest bash 4 seconds ago Up 4 seconds ago myubi",
"podman inspect myubi --format \"{{.OCIRuntime}}\" <runtime>",
"vim /etc/containers/containers.conf [engine] runtime = \" <runtime> \"",
"podman run --name=myubi -dt ubi8 bash Resolved \"ubi8\" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi8:latest... Storing signatures",
"podman inspect myubi --format \"{{.OCIRuntime}}\" <runtime>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/selecting-a-container-runtime_building-running-and-managing-containers |
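For a rootless user, the runtime switch described above can be made in the per-user file instead of /etc/containers/containers.conf; this sketch assumes crun is the desired runtime.
# Create the per-user configuration directory and file.
mkdir -p "$HOME/.config/containers"
cat > "$HOME/.config/containers/containers.conf" <<'EOF'
[engine]
runtime = "crun"
EOF
# Containers started by this user now use crun.
podman run --name=myubi -dt ubi8 bash
podman inspect myubi --format "{{.OCIRuntime}}"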
14.3. The (Non-transactional) CarMart Quickstart Using JBoss EAP | 14.3. The (Non-transactional) CarMart Quickstart Using JBoss EAP The Carmart (non-transactional) quickstart is supported for JBoss Data Grid's Library Mode with the JBoss EAP container. 14.3.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 6.0 (Java SDK 1.6) or better JBoss Enterprise Application Platform 6.x or JBoss Enterprise Web Server 2.x Maven 3.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories 14.3.2. Build and Deploy the CarMart Quickstart to JBoss EAP The following procedure provides directions to build and deploy the CarMart application to JBoss EAP. Prerequisites Prerequisites for this procedure are as follows: Obtain the supported JBoss Data Grid Library Mode distribution files. Ensure that the JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories are installed and configured. For details, see Chapter 3, Install and Use the Maven Repositories Select a JBoss server to use (JBoss Enterprise Application Platform 6 (or better) or JBoss EAP 6 (or better)). Procedure 14.1. Build and Deploy CarMart to JBoss EAP Start JBoss EAP Depending on your operating system, use the appropriate command from the following to start the first instance of your selected application server: For Linux users: For Windows users: Navigate to the Root Directory Open a command line and navigate to the root directory of this quickstart. Build and Deploy the Application Use the following command to build and deploy the application using Maven: Result The target war file ( target/jboss-carmart.war ) is deployed to the running instance of JBoss EAP. 14.3.3. View the CarMart Quickstart on JBoss EAP The following procedure outlines how to view the CarMart quickstart on JBoss EAP: Prerequisite The CarMart quickstart must be built and deployed to be viewed. Procedure 14.2. View the CarMart Quickstart on JBoss EAP To view the application, use your browser to navigate to the following link: 14.3.4. Remove the CarMart Quickstart from JBoss EAP The following procedure provides directions to remove a deployed application from JBoss EAP. Procedure 14.3. Remove an Application from JBoss EAP To remove an application, use the following from the root directory of this quickstart: | [
"USDJBOSS_HOME/bin/standalone.sh",
"USDJBOSS_HOME\\bin\\standalone.bat",
"mvn clean package jboss-as:deploy",
"http://localhost:8080/jboss-carmart",
"mvn jboss-as:undeploy"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-the_non-transactional_carmart_quickstart_using_jboss_eap |
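Put together, a complete session for this quickstart could look like the following sketch; the quickstart directory name is a placeholder and depends on where the distribution files were extracted.
# Terminal 1: start the application server.
$JBOSS_HOME/bin/standalone.sh
# Terminal 2: build and deploy from the quickstart root directory (path is hypothetical).
cd jboss-datagrid-quickstarts/carmart
mvn clean package jboss-as:deploy
# Check that the application responds, then undeploy when finished.
curl -I http://localhost:8080/jboss-carmart
mvn jboss-as:undeploy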
25.2. An Overview of Security-Related Packages | 25.2. An Overview of Security-Related Packages To enable the secure server, you must have the following packages installed at a minimum: httpd The httpd package contains the httpd daemon and related utilities, configuration files, icons, Apache HTTP Server modules, man pages, and other files used by the Apache HTTP Server. mod_ssl The mod_ssl package includes the mod_ssl module, which provides strong cryptography for the Apache HTTP Server via the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. openssl The openssl package contains the OpenSSL toolkit. The OpenSSL toolkit implements the SSL and TLS protocols, and also includes a general purpose cryptography library. Additionally, other software packages provide certain security functionalities (but are not required by the secure server to function): httpd-devel The httpd-devel package contains the Apache HTTP Server include files, header files, and the APXS utility. You need all of these if you intend to load any extra modules, other than the modules provided with this product. Refer to the Reference Guide for more information on loading modules onto your secure server using Apache's dynamic shared object (DSO) functionality. If you do not intend to load other modules onto your Apache HTTP Server, you do not need to install this package. OpenSSH packages The OpenSSH packages provide the OpenSSH set of network connectivity tools for logging into and executing commands on a remote machine. OpenSSH tools encrypt all traffic (including passwords), so you can avoid eavesdropping, connection hijacking, and other attacks on the communications between your machine and the remote machine. The openssh package includes core files needed by both the OpenSSH client programs and the OpenSSH server. The openssh package also contains scp , a secure replacement for rcp (for securely copying files between machines). The openssh-askpass package supports the display of a dialog window which prompts for a password during use of the OpenSSH agent. The openssh-askpass-gnome package can be used in conjunction with the GNOME desktop environment to display a graphical dialog window when OpenSSH programs prompt for a password. If you are running GNOME and using OpenSSH utilities, you should install this package. The openssh-server package contains the sshd secure shell daemon and related files. The secure shell daemon is the server side of the OpenSSH suite and must be installed on your host to allow SSH clients to connect to your host. The openssh-clients package contains the client programs needed to make encrypted connections to SSH servers, including the following: ssh , a secure replacement for rsh ; sftp , a secure replacement for ftp (for transferring files between machines); and slogin , a secure replacement for rlogin (for remote login) and telnet (for communicating with another host via the Telnet protocol). For more information about OpenSSH, see Chapter 20, OpenSSH , the Reference Guide , and the OpenSSH website at http://www.openssh.com/ . openssl-devel The openssl-devel package contains the static libraries and the include file needed to compile applications with support for various cryptographic algorithms and protocols. You need to install this package only if you are developing applications which include SSL support - you do not need this package to use SSL. stunnel The stunnel package provides the Stunnel SSL wrapper. Stunnel supports the SSL encryption of TCP connections. 
It provides encryption for non-SSL aware daemons and protocols (such as POP, IMAP, and LDAP) without requiring any changes to the daemon's code. Note Newer implementations of various daemons now provide their services natively over SSL, such as dovecot or OpenLDAP's slapd server, which may be more desirable than using stunnel . For example, use of stunnel only provides wrapping of protocols, while the native support in OpenLDAP's slapd can also handle in-band upgrades for using encryption in response to a StartTLS client request. Table 25.1, "Security Packages" displays a summary of the secure server packages and whether each package is optional for the installation of a secure server. Table 25.1. Security Packages Package Name Optional? httpd no mod_ssl no openssl no httpd-devel yes openssh yes openssh-askpass yes openssh-askpass-gnome yes openssh-clients yes openssh-server yes openssl-devel yes stunnel yes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/apache_http_secure_server_configuration-an_overview_of_security_related_packages |
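To illustrate the kind of wrapping stunnel provides, the following sketch forwards SSL connections on the POP3S port to a local non-SSL POP3 daemon; the certificate path, service name, and port pairing are assumptions, not values taken from this chapter.
# Create a minimal stunnel configuration (sketch only).
cat > /etc/stunnel/stunnel.conf <<'EOF'
cert = /etc/stunnel/stunnel.pem
[pop3s]
accept  = 995
connect = 110
EOF
# Start stunnel with that configuration.
stunnel /etc/stunnel/stunnel.conf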
Chapter 2. Red Hat JBoss Enterprise Application Platform setups | Chapter 2. Red Hat JBoss Enterprise Application Platform setups You can configure a JBoss EAP instance on a single server that runs one or more applications, or you can configure multiple JBoss EAP server instances that are clustered together with an external load balancer for load balancing and failover. 2.1. Simple setup using a single JBoss EAP server instance A simple JBoss EAP setup consists of a single server running one or more deployed applications. Figure 2.1. Simple setup using a single JBoss EAP server instance The JBoss EAP instance uses the datasources subsystem to connect to the following components: A database A Kerberos server JBoss EAP uses the elytron (security) subsystem to connect to the Kerberos server and expose the server to the two deployed applications. JBoss EAP uses the undertow subsystem to handle requests from the client server and send requests to an appropriate application. The application uses the APIs exposed by JBoss EAP to connect to the database and Kerberos server. The application completes its task and the undertow subsystem sends the response back to the requester. 2.2. Complex setup using multiple JBoss EAP server instances A complex setup may involve multiple JBoss EAP server instances. For example, you can use a load balancer to distribute the processing load across JBoss EAP instances in a managed domain. The following diagram displays three JBoss EAP instances that are arranged by a load balancer in a managed domain: Figure 2.2. Complex setup using multiple JBoss EAP server instances In this example, the administrator configured each instance to use mod_cluster and infinispan session replication to provide high availability (HA) support for applications. Each instance includes the following components: A web application A web service A deployed enterprise bean A database connection that was established with the datasources subsystem A connection with the LDAP server that was established with the elytron (security) subsystem The diagram displays the following configurations associated with a complex JBoss EAP setup: EAP 1 has a messaging-activemq subsystem that is configured with a Jakarta Messaging queue that connects to an external message broker. The external message broker is shared among all running JBoss EAP instances. All inbound requests go through the load balancer. Depending on the configured load-balancing algorithm and the information provided by each JBoss EAP instance, the load balancer directs the requests to the appropriate JBoss EAP instance. Each JBoss EAP instance uses the undertow subsystem to direct the requests to the appropriate application. Each application uses the APIs exposed by JBoss EAP to connect to the database and Kerberos server. After an application performs its work, the undertow subsystem sends a response to the requester. Note The infinispan subsystem propagates non-persisted information, such as session information, among the JBoss EAP instances. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/introduction_to_red_hat_jboss_enterprise_application_platform/assembly_eap-setups_assembly-intro-eap |
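As a hedged illustration of the datasources subsystem connection described above, the following management CLI sketch registers a JDBC driver and a datasource; the driver, JNDI name, connection URL, and credentials are placeholders, not values from this chapter.
# Run inside EAP_HOME/bin/jboss-cli.sh --connect
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql)
data-source add --name=AppDS --jndi-name=java:jboss/datasources/AppDS --driver-name=postgresql --connection-url=jdbc:postgresql://db.example.com:5432/appdb --user-name=appuser --password=changeit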
Chapter 15. Troubleshooting IdM client installation | Chapter 15. Troubleshooting IdM client installation The following sections describe how to gather information about a failing IdM client installation, and how to resolve common installation issues. 15.1. Reviewing IdM client installation errors When you install an Identity Management (IdM) client, debugging information is appended to /var/log/ipaclient-install.log . If a client installation fails, the installer logs the failure and rolls back changes to undo any modifications to the host. The reason for the installation failure may not be at the end of the log file, as the installer also logs the roll back procedure. To troubleshoot a failing IdM client installation, review lines labeled ScriptError in the /var/log/ipaclient-install.log file and use this information to resolve any corresponding issues. Prerequisites You must have root privileges to display the contents of IdM log files. Procedure Use the grep utility to retrieve any occurrences of the keyword ScriptError from the /var/log/ipaclient-install.log file. To review a log file interactively, open the end of the log file using the less utility and use the ^ and v arrow keys to navigate. Additional resources If you are unable to resolve a failing IdM client installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the client. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 15.2. Resolving issues if the client installation fails to update DNS records The IdM client installer issues nsupdate commands to create PTR, SSHFP, and additional DNS records. However, the installation process fails if the client is unable to update DNS records after installing and configuring the client software. To fix this problem, verify the configuration and review DNS errors in /var/log/client-install.log . Prerequisites You are using IdM DNS as the DNS solution for your IdM environment. Procedure Ensure that dynamic updates for the DNS zone the client is in are enabled: Ensure that the IdM server running the DNS service has port 53 opened for both TCP and UDP protocols. Use the grep utility to retrieve the contents of nsupdate commands from /var/log/client-install.log to see which DNS record updates are failing. Additional resources If you are unable to resolve a failing installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the client. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 15.3. Resolving issues if the client installation fails to join the IdM Kerberos realm The IdM client installation process fails if the client is unable to join the IdM Kerberos realm. This failure can be caused by an empty Kerberos keytab. Prerequisites Removing system files requires root privileges. Procedure Remove /etc/krb5.keytab . Retry the IdM client installation.
Additional resources If you are unable to resolve a failing installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the client. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 15.4. Resolving issues if the client installation fails to configure automount In RHEL 7, you could configure an automount location for your client during the client installation. In RHEL 8, running the ipa-client-install command with the --automount-location <raleigh> fails to configure the automount location. However, as the rest of the installation is successful, running /usr/sbin/ipa-client-automount <raleigh> after the installation configures an automount location for the client correctly. Prerequisites With the exception of configuring an automount location, the IdM client installation proceeded correctly. The CLI output was: Procedure Configure the automount location: Additional resources man ipa-client-automount 15.5. Additional resources To troubleshoot installing the first IdM server, see Troubleshooting IdM server installation . To troubleshoot installing an IdM replica, see Troubleshooting IdM replica installation . | [
"[user@server ~]USD sudo grep ScriptError /var/log/ipaclient-install.log [sudo] password for user: 2020-05-28T18:24:50Z DEBUG The ipa-client-install command failed, exception: ScriptError : One of password / principal / keytab is required.",
"[user@server ~]USD sudo less -N +G /var/log/ipaclient-install.log",
"[user@server ~]USD ipa dnszone-mod idm.example.com. --dynamic-update=TRUE",
"[user@server ~]USD sudo firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp [sudo] password for user: success [user@server ~]USD firewall-cmd --runtime-to-permanent success",
"[user@server ~]USD sudo grep nsupdate /var/log/ipaclient-install.log",
"Joining realm failed: Failed to add key to the keytab child exited with 11 Installation failed. Rolling back changes.",
"[user@client ~]USD sudo rm /etc/krb5.keytab [sudo] password for user: [user@client ~]USD ls /etc/krb5.keytab ls: cannot access '/etc/krb5.keytab': No such file or directory",
"The ipa-client-install command was successful.",
"/usr/sbin/ipa-client-automount -U --location <raleigh>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/troubleshooting-idm-client-installation_installing-identity-management |
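After removing the empty keytab as described above, the installation is typically retried with explicit enrollment options; the domain, realm, and principal below are placeholders.
# Re-run the installer once the stale keytab is gone.
ipa-client-install --domain=idm.example.com --realm=IDM.EXAMPLE.COM --principal=admin --mkhomedir
# If it fails again, look for new ScriptError lines.
sudo grep ScriptError /var/log/ipaclient-install.log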
function::remote_id | function::remote_id Name function::remote_id - The index of this instance in a remote execution. Synopsis Arguments None Description This function returns a number 0..N, which is the unique index of this particular script execution from a swarm of " stap --remote A --remote B ... " runs, and is the same number " stap --remote-prefix " would print. The function returns -1 if the script was not launched with " stap --remote " , or if the remote staprun/stapsh are older than version 1.7. | [
"remote_id:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-remote-id |
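A short sketch of how remote_id is typically exercised from the command line; the host names are placeholders.
# Run the same probe on two remote hosts; each instance prints its own index (0, 1, ...).
stap --remote host1.example.com --remote host2.example.com -e 'probe begin { printf("remote_id: %d\n", remote_id()); exit() }'
# Run locally without --remote: remote_id() returns -1.
stap -e 'probe begin { printf("remote_id: %d\n", remote_id()); exit() }'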
Chapter 8. Updating Logging | Chapter 8. Updating Logging There are two types of logging updates: minor release updates (5.y.z) and major release updates (5.y). 8.1. Minor release updates If you installed the logging Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps. If you installed the logging Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update . 8.2. Major release updates For major version updates you must complete some manual steps. For major release version compatibility and support information, see OpenShift Operator Life Cycles . 8.3. Upgrading the Red Hat OpenShift Logging Operator to watch all namespaces In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components. Prerequisites You have installed the OpenShift CLI ( oc ). You have administrator permissions. Procedure Delete the subscription by running the following command: USD oc -n openshift-logging delete subscription <subscription> Delete the Operator group by running the following command: USD oc -n openshift-logging delete operatorgroup <operator_group_name> Delete the cluster service version (CSV) by running the following command: USD oc delete clusterserviceversion cluster-logging.<version> Redeploy the Red Hat OpenShift Logging Operator by following the "Installing Logging" documentation. Verification Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string. To do this, run the following command and inspect the output: USD oc get operatorgroup <operator_group_name> -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - "" # ... 8.4. Updating the Red Hat OpenShift Logging Operator To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-logging project. Click the Red Hat OpenShift Logging Operator. Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.9 , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.9 , and click Save . Note the cluster-logging.v5.9.<z> version. Wait for a few seconds, and then go to Operators Installed Operators to verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.9.<z> version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. 
For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema". 8.5. Updating the Loki Operator To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Loki Operator. You have administrator permissions. You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-operators-redhat project. Click the Loki Operator . Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y , and click Save . Note the loki-operator.v5.y.z version. Wait for a few seconds, then click Operators Installed Operators . Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema". 8.6. Upgrading the LokiStack storage schema If you are using the Red Hat OpenShift Logging Operator with the Loki Operator, the Red Hat OpenShift Logging Operator 5.9 or later supports the v13 schema version in the LokiStack custom resource. Upgrading to the v13 schema version is recommended because it is the schema version to be supported going forward. Procedure Add the v13 schema version in the LokiStack custom resource as follows: apiVersion: loki.grafana.com/v1 kind: LokiStack # ... spec: # ... storage: schemas: # ... version: v12 1 - effectiveDate: "<yyyy>-<mm>-<future_dd>" 2 version: v13 # ... 1 Do not delete. Data persists in its original schema version. Keep the schema versions to avoid data loss. 2 Set a future date that has not yet started in the Coordinated Universal Time (UTC) time zone. Tip To edit the LokiStack custom resource, you can run the oc edit command: USD oc edit lokistack <name> -n openshift-logging Verification On or after the specified effectiveDate date, check that there is no LokistackSchemaUpgradesRequired alert in the web console in Administrator Observe Alerting . 8.7. Updating the OpenShift Elasticsearch Operator To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Prerequisites If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. Important If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod. 
When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. The Logging status is healthy: All pods have a ready status. The Elasticsearch cluster is healthy. Your Elasticsearch and Kibana data is backed up . You have administrator permissions. You have installed the OpenShift CLI ( oc ) for the verification steps. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the openshift-operators-redhat project. Click OpenShift Elasticsearch Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select stable-5.y and click Save . Note the elasticsearch-operator.v5.y.z version. Wait for a few seconds, then click Operators Installed Operators . Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . Verification Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output: USD oc get pod -n openshift-logging --selector component=elasticsearch Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m Verify that the Elasticsearch cluster status is green by entering the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health Example output { "cluster_name" : "elasticsearch", "status" : "green", } Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output: USD oc project openshift-logging USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output: USD oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices Verify that the output includes the app-00000x , infra-00000x , audit-00000x , .security indices: Example 8.1. 
Sample output with indices in a green status Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0 Verify that the log visualizer is updated to the correct version by entering the following command and observing the output: USD oc get kibana kibana -o json Verify that the output includes a Kibana pod with the ready status: Example 8.2. Sample output with a ready Kibana pod [ { "clusterCondition": { "kibana-5fdd766ffd-nb2jj": [ { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" }, { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" } ] }, "deployment": "kibana", "pods": { "failed": [], "notReady": [] "ready": [] }, "replicaSets": [ "kibana-5fdd766ffd" ], "replicas": 1 } ] | [
"oc -n openshift-logging delete subscription <subscription>",
"oc -n openshift-logging delete operatorgroup <operator_group_name>",
"oc delete clusterserviceversion cluster-logging.<version>",
"oc get operatorgroup <operator_group_name> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack spec: storage: schemas: # version: v12 1 - effectiveDate: \"<yyyy>-<mm>-<future_dd>\" 2 version: v13",
"oc edit lokistack <name> -n openshift-logging",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/cluster-logging-upgrading |
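Since both operator update procedures in the article above end with a check that the LokiStack custom resource already declares the v13 schema version, that check lends itself to a small script. The following is a minimal sketch only: it assumes a LokiStack instance named logging-loki in the openshift-logging namespace, which is a common but not universal choice, so adjust both names to match your cluster.

# Sketch: list the schema versions declared in a LokiStack CR and flag a missing v13 entry.
NAME=logging-loki            # assumed instance name
NAMESPACE=openshift-logging  # assumed namespace

versions=$(oc get lokistack "$NAME" -n "$NAMESPACE" \
  -o jsonpath='{.spec.storage.schemas[*].version}')
echo "Declared schema versions: ${versions:-<none>}"

if [[ " $versions " == *" v13 "* ]]; then
  echo "v13 schema already declared - no action needed."
else
  echo "v13 schema missing - add it with a future effectiveDate, for example via:"
  echo "  oc edit lokistack $NAME -n $NAMESPACE"
fi

The same check can be repeated after the effectiveDate has passed, alongside the LokistackSchemaUpgradesRequired alert check described in the verification step.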
18.2.2. Specifying the Mount Options | 18.2.2. Specifying the Mount Options To specify additional mount options, use the command in the following form: When supplying multiple options, do not insert a space after a comma, or mount will incorrectly interpret the values following spaces as additional parameters. Table 18.2, "Common Mount Options" provides a list of common mount options. For a complete list of all available options, consult the relevant manual page as referred to in Section 18.4.1, "Manual Page Documentation" . Table 18.2. Common Mount Options Option Description async Allows the asynchronous input/output operations on the file system. auto Allows the file system to be mounted automatically using the mount -a command. defaults Provides an alias for async,auto,dev,exec,nouser,rw,suid . exec Allows the execution of binary files on the particular file system. loop Mounts an image as a loop device. noauto Default behavior disallows the automatic mount of the file system using the mount -a command. noexec Disallows the execution of binary files on the particular file system. nouser Disallows an ordinary user (that is, other than root ) to mount and unmount the file system. remount Remounts the file system in case it is already mounted. ro Mounts the file system for reading only. rw Mounts the file system for both reading and writing. user Allows an ordinary user (that is, other than root ) to mount and unmount the file system. See Example 18.3, "Mounting an ISO Image" for an example usage. Example 18.3. Mounting an ISO Image An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that the ISO image of the Fedora 14 installation disc is present in the current working directory and that the /media/cdrom/ directory exists, mount the image to this directory by running the following command as root : Note that ISO 9660 is by design a read-only file system. | [
"mount -o options device directory",
"~]# mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/sect-using_the_mount_command-mounting-options |
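To make the comma-separated syntax from the section above more concrete, the commands below combine several of the options listed in Table 18.2 in a single -o argument. The device /dev/sdb1 and the mount point /mnt/data are placeholders and should be replaced with values from your own system.

~]# mount -o ro,noexec /dev/sdb1 /mnt/data
~]# mount -o remount,rw /mnt/data

Note that there is no space after the commas; writing ro, noexec instead would cause mount to interpret noexec as a separate, misplaced argument rather than as a mount option.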
Chapter 2. Understanding OpenShift Pipelines | Chapter 2. Understanding OpenShift Pipelines Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions. 2.1. Key features Red Hat OpenShift Pipelines is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers. Red Hat OpenShift Pipelines are designed for decentralized teams that work on microservice-based architecture. Red Hat OpenShift Pipelines use standard CI/CD pipeline definitions that are easy to extend and integrate with the existing Kubernetes tools, enabling you to scale on-demand. You can use Red Hat OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform. You can use the OpenShift Container Platform Developer console to create Tekton resources, view logs of pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces. 2.2. OpenShift Pipelines Concepts This guide provides a detailed view of the various pipeline concepts. 2.2.1. Tasks Task resources are the building blocks of a pipeline and consist of sequentially executed steps. It is essentially a function of inputs and outputs. A task can run individually or as a part of the pipeline. Tasks are reusable and can be used in multiple pipelines. Steps are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets. The following example shows the apply-manifests task. apiVersion: tekton.dev/v1 1 kind: Task 2 metadata: name: apply-manifests 3 spec: 4 workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: "k8s" steps: - name: apply image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest workingDir: /workspace/source command: ["/bin/bash", "-c"] args: - |- echo Applying manifests in USD(params.manifest_dir) directory oc apply -f USD(params.manifest_dir) echo ----------------------------------- 1 The task API version, v1 . 2 The type of Kubernetes object, Task . 3 The unique name of this task. 4 The list of parameters and steps in the task and the workspace used by the task. This task starts the pod and runs a container inside that pod using the specified image to run the specified commands. Note Starting with OpenShift Pipelines 1.6, the following defaults from the step YAML file are removed: The HOME environment variable does not default to the /tekton/home directory The workingDir field does not default to the /workspace directory Instead, the container for the step defines the HOME environment variable and the workingDir field. However, you can override the default values by specifying the custom values in the YAML file for the step. 
As a temporary measure, to maintain backward compatibility with the older OpenShift Pipelines versions, you can set the following fields in the TektonConfig custom resource definition to false : 2.2.2. When expression When expressions guard task execution by setting criteria for the execution of tasks within a pipeline. They contain a list of components that allows a task to run only when certain criteria are met. When expressions are also supported in the final set of tasks that are specified using the finally field in the pipeline YAML file. The key components of a when expression are as follows: input : Specifies static inputs or variables such as a parameter, task result, and execution status. You must enter a valid input. If you do not enter a valid input, its value defaults to an empty string. operator : Specifies the relationship of an input to a set of values . Enter in or notin as your operator values. values : Specifies an array of string values. Enter a non-empty array of static values or variables such as parameters, results, and a bound state of a workspace. The declared when expressions are evaluated before the task is run. If the value of a when expression is True , the task is run. If the value of a when expression is False , the task is skipped. You can use the when expressions in various use cases. For example, whether: The result of a task is as expected. A file in a Git repository has changed in the commits. An image exists in the registry. An optional workspace is available. The following example shows the when expressions for a pipeline run. The pipeline run will execute the create-file task only if the following criteria are met: the path parameter is README.md , and the echo-file-exists task executed only if the exists result from the check-file task is yes . apiVersion: tekton.dev/v1 kind: PipelineRun 1 metadata: generateName: guarded-pr- spec: taskRunTemplate: serviceAccountName: pipeline pipelineSpec: params: - name: path type: string description: The path of the file to be created workspaces: - name: source description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - name: create-file 2 when: - input: "USD(params.path)" operator: in values: ["README.md"] workspaces: - name: source workspace: source taskSpec: workspaces: - name: source description: The workspace to create the readme file in steps: - name: write-new-stuff image: ubuntu script: 'touch USD(workspaces.source.path)/README.md' - name: check-file params: - name: path value: "USD(params.path)" workspaces: - name: source workspace: source runAfter: - create-file taskSpec: params: - name: path workspaces: - name: source description: The workspace to check for the file results: - name: exists description: indicates whether the file exists or is missing steps: - name: check-file image: alpine script: | if test -f USD(workspaces.source.path)/USD(params.path); then printf yes | tee /tekton/results/exists else printf no | tee /tekton/results/exists fi - name: echo-file-exists when: 3 - input: "USD(tasks.check-file.results.exists)" operator: in values: ["yes"] taskSpec: steps: - name: echo image: ubuntu script: 'echo file exists' ... - name: task-should-be-skipped-1 when: 4 - input: "USD(params.path)" operator: notin values: ["README.md"] taskSpec: steps: - name: echo image: ubuntu script: exit 1 ... 
finally: - name: finally-task-should-be-executed when: 5 - input: "USD(tasks.echo-file-exists.status)" operator: in values: ["Succeeded"] - input: "USD(tasks.status)" operator: in values: ["Succeeded"] - input: "USD(tasks.check-file.results.exists)" operator: in values: ["yes"] - input: "USD(params.path)" operator: in values: ["README.md"] taskSpec: steps: - name: echo image: ubuntu script: 'echo finally done' params: - name: path value: README.md workspaces: - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Mi 1 Specifies the type of Kubernetes object. In this example, PipelineRun . 2 Task create-file used in the pipeline. 3 when expression that specifies to execute the echo-file-exists task only if the exists result from the check-file task is yes . 4 when expression that specifies to skip the task-should-be-skipped-1 task only if the path parameter is README.md . 5 when expression that specifies to execute the finally-task-should-be-executed task only if the execution status of the echo-file-exists task and the task status is Succeeded , the exists result from the check-file task is yes , and the path parameter is README.md . The Pipeline Run details page of the OpenShift Container Platform web console shows the status of the tasks and when expressions as follows: All the criteria are met: Tasks and the when expression symbol, which is represented by a diamond shape are green. Any one of the criteria are not met: Task is skipped. Skipped tasks and the when expression symbol are grey. None of the criteria are met: Task is skipped. Skipped tasks and the when expression symbol are grey. Task run fails: Failed tasks and the when expression symbol are red. 2.2.3. Finally tasks The finally tasks are the final set of tasks specified using the finally field in the pipeline YAML file. A finally task always executes the tasks within the pipeline, irrespective of whether the pipeline runs are executed successfully. The finally tasks are executed in parallel after all the pipeline tasks are run, before the corresponding pipeline exits. You can configure a finally task to consume the results of any task within the same pipeline. This approach does not change the order in which this final task is run. It is executed in parallel with other final tasks after all the non-final tasks are executed. The following example shows a code snippet of the clone-cleanup-workspace pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After executing the pipeline tasks, the cleanup task specified in the finally section of the pipeline YAML file cleans up the workspace. apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: clone-cleanup-workspace 1 spec: workspaces: - name: git-source 2 tasks: - name: clone-app-repo 3 taskRef: name: git-clone-from-catalog params: - name: url value: https://github.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - name: cleanup 4 taskRef: 5 name: cleanup-workspace workspaces: 6 - name: source workspace: git-source - name: check-git-commit params: 7 - name: commit value: USD(tasks.clone-app-repo.results.commit) taskSpec: 8 params: - name: commit steps: - name: check-commit-initialized image: alpine script: | if [[ ! USD(params.commit) ]]; then exit 1 fi 1 Unique name of the pipeline. 2 The shared workspace where the git repository is cloned. 3 The task to clone the application repository to the shared workspace. 
4 The task to clean-up the shared workspace. 5 A reference to the task that is to be executed in the task run. 6 A shared storage volume that a task in a pipeline needs at runtime to receive input or provide output. 7 A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value. 8 Embedded task definition. 2.2.4. TaskRun A TaskRun instantiates a task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a pipeline run for each task in a pipeline. A task consists of one or more steps that execute container images, and each container image performs a specific piece of build work. A task run executes the steps in a task in the specified order, until all steps execute successfully or a failure occurs. A TaskRun is automatically created by a PipelineRun for each task in a pipeline. The following example shows a task run that runs the apply-manifests task with the relevant input parameters: apiVersion: tekton.dev/v1 1 kind: TaskRun 2 metadata: name: apply-manifests-taskrun 3 spec: 4 taskRunTemplate: serviceAccountName: pipeline taskRef: 5 kind: Task name: apply-manifests workspaces: 6 - name: source persistentVolumeClaim: claimName: source-pvc 1 The task run API version v1 . 2 Specifies the type of Kubernetes object. In this example, TaskRun . 3 Unique name to identify this task run. 4 Definition of the task run. For this task run, the task and the required workspace are specified. 5 Name of the task reference used for this task run. This task run executes the apply-manifests task. 6 Workspace used by the task run. 2.2.5. Pipelines A Pipeline is a collection of Task resources arranged in a specific order of execution. They are executed to construct complex workflows that automate the build, deployment and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks. A Pipeline resource definition consists of a number of fields or attributes, which together enable the pipeline to accomplish a specific goal. Each Pipeline resource definition must contain at least one Task resource, which ingests specific inputs and produces specific outputs. The pipeline definition can also optionally include Conditions , Workspaces , Parameters , or Resources depending on the application requirements. 
The following example shows the build-and-deploy pipeline, which builds an application image from a Git repository using the buildah task provided in the openshift-pipelines namespace: apiVersion: tekton.dev/v1 1 kind: Pipeline 2 metadata: name: build-and-deploy 3 spec: 4 workspaces: 5 - name: shared-workspace params: 6 - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: "pipelines-1.18" - name: IMAGE type: string description: image to be built from the code tasks: 7 - name: fetch-repository taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: shared-workspace params: - name: URL value: USD(params.git-url) - name: SUBDIRECTORY value: "" - name: DELETE_EXISTING value: "true" - name: REVISION value: USD(params.git-revision) - name: build-image 8 taskRef: resolver: cluster params: - name: kind value: task - name: name value: buildah - name: namespace value: openshift-pipelines workspaces: - name: source workspace: shared-workspace params: - name: TLSVERIFY value: "false" - name: IMAGE value: USD(params.IMAGE) runAfter: - fetch-repository - name: apply-manifests 9 taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: 10 - build-image - name: update-deployment taskRef: name: update-deployment workspaces: - name: source workspace: shared-workspace params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests 1 Pipeline API version v1 . 2 Specifies the type of Kubernetes object. In this example, Pipeline . 3 Unique name of this pipeline. 4 Specifies the definition and structure of the pipeline. 5 Workspaces used across all the tasks in the pipeline. 6 Parameters used across all the tasks in the pipeline. 7 Specifies the list of tasks used in the pipeline. 8 Task build-image , which uses the buildah task provided in the openshift-pipelines namespace to build application images from a given Git repository. 9 Task apply-manifests , which uses a user-defined task with the same name. 10 Specifies the sequence in which tasks are run in a pipeline. In this example, the apply-manifests task is run only after the build-image task is completed. Note The Red Hat OpenShift Pipelines Operator installs the Buildah task in the openshift-pipelines namespace and creates the pipeline service account with sufficient permission to build and push an image. The Buildah task can fail when associated with a different service account with insufficient permissions. 2.2.6. PipelineRun A PipelineRun is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow. A pipeline run is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run. The pipeline runs the tasks sequentially until they are complete or a task fails. The status field tracks and the progress of each task run and stores it for monitoring and auditing purposes. 
The following example runs the build-and-deploy pipeline with relevant resources and parameters: apiVersion: tekton.dev/v1 1 kind: PipelineRun 2 metadata: name: build-deploy-api-pipelinerun 3 spec: pipelineRef: name: build-and-deploy 4 params: 5 - name: deployment-name value: vote-api - name: git-url value: https://github.com/openshift-pipelines/vote-api.git - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api workspaces: 6 - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi 1 Pipeline run API version v1 . 2 The type of Kubernetes object. In this example, PipelineRun . 3 Unique name to identify this pipeline run. 4 Name of the pipeline to be run. In this example, build-and-deploy . 5 The list of parameters required to run the pipeline. 6 Workspace used by the pipeline run. Additional resources Authenticating pipelines with repositories using secrets 2.2.7. Pod templates Optionally, you can define a pod template in a PipelineRun or TaskRun custom resource (CR). You can use any parameters available for a Pod CR in the pod template. When creating pods for executing the pipeline or task, OpenShift Pipelines sets these parameters for every pod. For example, you can use a pod template to make the pod execute as a user and not as root. For a pipeline run, you can define a pod template in the pipelineRunTemplate.podTemplate spec, as in the following example: Example PipelineRun CR with a pod template apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: mypipelinerun spec: pipelineRef: name: mypipeline taskRunTemplate: podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001 Note In the earlier API version v1beta1 , the pod template for a PipelineRun CR was specified as podTemplate directly in the spec: section. This format is not supported in the v1 API. For a task run, you can define a pod template in the podTemplate spec, as in the following example: Example TaskRun CR with a pod template apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1 kind: TaskRun metadata: name: mytaskrun namespace: default spec: taskRef: name: mytask podTemplate: schedulerName: volcano securityContext: runAsNonRoot: true runAsUser: 1001 Additional resources Using pods 2.2.8. Workspaces Note It is recommended that you use workspaces instead of the PipelineResource CRs in Red Hat OpenShift Pipelines, as PipelineResource CRs are difficult to debug, limited in scope, and make tasks less reusable. Workspaces declare shared storage volumes that a task in a pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A task or pipeline declares the workspace and you must provide the specific location details of the volume. It is then mounted into that workspace in a task run or a pipeline run. This separation of volume declaration from runtime storage volumes makes the tasks reusable, flexible, and independent of the user environment. 
With workspaces, you can: Store task inputs and outputs Share data among tasks Use it as a mount point for credentials held in secrets Use it as a mount point for configurations held in config maps Use it as a mount point for common tools shared by an organization Create a cache of build artifacts that speed up jobs You can specify workspaces in the TaskRun or PipelineRun using: A read-only config map or secret An existing persistent volume claim shared with other tasks A persistent volume claim from a provided volume claim template An emptyDir that is discarded when the task run completes The following example shows a code snippet of the build-and-deploy pipeline, which declares a shared-workspace workspace for the build-image and apply-manifests tasks as defined in the pipeline. apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: 1 - name: shared-workspace params: ... tasks: 2 - name: build-image taskRef: resolver: cluster params: - name: kind value: task - name: name value: buildah - name: namespace value: openshift-pipelines workspaces: 3 - name: source 4 workspace: shared-workspace 5 params: - name: TLSVERIFY value: "false" - name: IMAGE value: USD(params.IMAGE) runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: 6 - name: source workspace: shared-workspace runAfter: - build-image ... 1 List of workspaces shared between the tasks defined in the pipeline. A pipeline can define as many workspaces as required. In this example, only one workspace named shared-workspace is declared. 2 Definition of tasks used in the pipeline. This snippet defines two tasks, build-image and apply-manifests , which share a common workspace. 3 List of workspaces used in the build-image task. A task definition can include as many workspaces as it requires. However, it is recommended that a task uses at most one writable workspace. 4 Name that uniquely identifies the workspace used in the task. This task uses one workspace named source . 5 Name of the pipeline workspace used by the task. Note that the workspace source in turn uses the pipeline workspace named shared-workspace . 6 List of workspaces used in the apply-manifests task. Note that this task shares the source workspace with the build-image task. Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you. The following code snippet of the build-deploy-api-pipelinerun pipeline run uses a volume claim template to create a persistent volume claim for defining the storage volume for the shared-workspace workspace used in the build-and-deploy pipeline. apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: build-deploy-api-pipelinerun spec: pipelineRef: name: build-and-deploy params: ... workspaces: 1 - name: shared-workspace 2 volumeClaimTemplate: 3 spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi 1 Specifies the list of pipeline workspaces for which volume binding will be provided in the pipeline run. 2 The name of the workspace in the pipeline for which the volume is being provided. 3 Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace. 2.2.9. Step actions A step is a part of a task. If you define a step in a task, you cannot reference this step from another task. 
However, you can optionally define a step action in a StepAction custom resource (CR). This CR contains the action that a step performs. You can reference a StepAction object from a step to create a step that performs the action. You can also use resolvers to reference a StepAction definition that is available from an external source. The following examples shows a StepAction CR named apply-manifests-action . This step action applies manifests from a source tree to your OpenShift Container Platform environment: apiVersion: tekton.dev/v1 kind: StepAction metadata: name: apply-manifests-action spec: params: - name: working_dir description: The working directory where the source is located type: string 1 default: "/workspace/source" - name: manifest_dir description: The directory in source that contains yaml manifests default: "k8s" results: - name: output description: The output of the oc apply command image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest env: - name: MANIFEST_DIR value: USD(params.manifest_dir) workingDir: USD(params.working_dir) script: | #!/usr/bin/env bash oc apply -f "USDMANIFEST_DIR" | tee USD(results.output) 1 The type specification for a parameter is optional. The StepAction CR does not include definitions of workspaces. Instead, the step action expects that the task that includes the action also provides the mounted source tree, typically using a workspace. A StepAction object can define parameters and results. When you reference this object, you must specify the values for the parameters of the StepAction object in the definition of the step. The results of the StepAction object automatically become the results of the step. Important To avoid malicious attacks that use the shell, the StepAction CR does not support using parameter values in a script value. Instead, you must use the env: section to define environment variables that contain the parameter values. The following example task includes a step that references the apply-manifests-action step action, provides the necessary parameters, and uses the result: apiVersion: tekton.dev/v1 kind: Task metadata: name: apply-manifests-with-action spec: workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: "k8s" steps: - name: apply ref: name: apply-manifests-action params: - name: working_dir value: "/workspace/source" - name: manifest_dir value: USD(params.manifest_dir) - name: display_result script: 'echo USD(step.apply.results.output)' Additional resources Specifying remote pipelines, tasks, and step actions using resolvers 2.2.10. Triggers Use Triggers in conjunction with pipelines to create a full-fledged CI/CD system where Kubernetes resources define the entire CI/CD execution. Triggers capture the external events, such as a Git pull request, and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources and instantiate the pipeline. For example, you define a CI/CD workflow using Red Hat OpenShift Pipelines for your application. The pipeline must start for any new changes to take effect in the application repository. Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes. 
Triggers consist of the following main resources that work together to form a reusable, decoupled, and self-sustaining CI/CD system: The TriggerBinding resource extracts the fields from an event payload and stores them as parameters. The following example shows a code snippet of the TriggerBinding resource, which extracts the Git repository information from the received event payload: apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerBinding 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id) 1 The API version of the TriggerBinding resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, TriggerBinding . 3 Unique name to identify the TriggerBinding resource. 4 List of parameters which will be extracted from the received event payload and passed to the TriggerTemplate resource. In this example, the Git repository URL, name, and revision are extracted from the body of the event payload. The TriggerTemplate resource acts as a standard for the way resources must be created. It specifies the way parameterized data from the TriggerBinding resource should be used. A trigger template receives input from the trigger binding, and then performs a series of actions that results in creation of new pipeline resources, and initiation of a new pipeline run. The following example shows a code snippet of a TriggerTemplate resource, which creates a pipeline run using the Git repository information received from the TriggerBinding resource you just created: apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerTemplate 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.18 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: 5 - apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: build-deploy-USD(tt.params.git-repo-name)-USD(uid) spec: taskRunTemplate: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi 1 The API version of the TriggerTemplate resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, TriggerTemplate . 3 Unique name to identify the TriggerTemplate resource. 4 Parameters supplied by the TriggerBinding resource. 5 List of templates that specify the way resources must be created using the parameters received through the TriggerBinding or EventListener resources. The Trigger resource combines the TriggerBinding and TriggerTemplate resources, and optionally, the interceptors event processor. Interceptors process all the events for a specific platform that runs before the TriggerBinding resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use secret for event verification. 
After the event data passes through an interceptor, it then goes to the trigger before you pass the payload data to the trigger binding. You can also use an interceptor to modify the behavior of the associated trigger referenced in the EventListener specification. The following example shows a code snippet of a Trigger resource, named vote-trigger that connects the TriggerBinding and TriggerTemplate resources, and the interceptors event processor. apiVersion: triggers.tekton.dev/v1beta1 1 kind: Trigger 2 metadata: name: vote-trigger 3 spec: taskRunTemplate: serviceAccountName: pipeline 4 interceptors: - ref: name: "github" 5 params: 6 - name: "secretRef" value: secretName: github-secret secretKey: secretToken - name: "eventTypes" value: ["push"] bindings: - ref: vote-app 7 template: 8 ref: vote-app --- apiVersion: v1 kind: Secret 9 metadata: name: github-secret type: Opaque stringData: secretToken: "1234567" 1 The API version of the Trigger resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, Trigger . 3 Unique name to identify the Trigger resource. 4 Service account name to be used. 5 Interceptor name to be referenced. In this example, github . 6 Desired parameters to be specified. 7 Name of the TriggerBinding resource to be connected to the TriggerTemplate resource. 8 Name of the TriggerTemplate resource to be connected to the TriggerBinding resource. 9 Secret to be used to verify events. The EventListener resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. It extracts event parameters from each TriggerBinding resource, and then processes this data to create Kubernetes resources as specified by the corresponding TriggerTemplate resource. The EventListener resource also performs lightweight event processing or basic filtering on the payload using event interceptors , which identify the type of payload and optionally modify it. Currently, pipeline triggers support five types of interceptors: Webhook Interceptors , GitHub Interceptors , GitLab Interceptors , Bitbucket Interceptors , and Common Expression Language (CEL) Interceptors . The following example shows an EventListener resource, which references the Trigger resource named vote-trigger . apiVersion: triggers.tekton.dev/v1beta1 1 kind: EventListener 2 metadata: name: vote-app 3 spec: taskRunTemplate: serviceAccountName: pipeline 4 triggers: - triggerRef: vote-trigger 5 1 The API version of the EventListener resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, EventListener . 3 Unique name to identify the EventListener resource. 4 Service account name to be used. 5 Name of the Trigger resource referenced by the EventListener resource. 2.3. Additional resources For information on installing OpenShift Pipelines, see Installing OpenShift Pipelines . For more details on creating custom CI/CD solutions, see Creating CI/CD solutions for applications using OpenShift Pipelines . For more details on re-encrypt TLS termination, see Re-encryption Termination . For more details on secured routes, see the Secured routes section. | [
"apiVersion: tekton.dev/v1 1 kind: Task 2 metadata: name: apply-manifests 3 spec: 4 workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: \"k8s\" steps: - name: apply image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest workingDir: /workspace/source command: [\"/bin/bash\", \"-c\"] args: - |- echo Applying manifests in USD(params.manifest_dir) directory oc apply -f USD(params.manifest_dir) echo -----------------------------------",
"spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false",
"apiVersion: tekton.dev/v1 kind: PipelineRun 1 metadata: generateName: guarded-pr- spec: taskRunTemplate: serviceAccountName: pipeline pipelineSpec: params: - name: path type: string description: The path of the file to be created workspaces: - name: source description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - name: create-file 2 when: - input: \"USD(params.path)\" operator: in values: [\"README.md\"] workspaces: - name: source workspace: source taskSpec: workspaces: - name: source description: The workspace to create the readme file in steps: - name: write-new-stuff image: ubuntu script: 'touch USD(workspaces.source.path)/README.md' - name: check-file params: - name: path value: \"USD(params.path)\" workspaces: - name: source workspace: source runAfter: - create-file taskSpec: params: - name: path workspaces: - name: source description: The workspace to check for the file results: - name: exists description: indicates whether the file exists or is missing steps: - name: check-file image: alpine script: | if test -f USD(workspaces.source.path)/USD(params.path); then printf yes | tee /tekton/results/exists else printf no | tee /tekton/results/exists fi - name: echo-file-exists when: 3 - input: \"USD(tasks.check-file.results.exists)\" operator: in values: [\"yes\"] taskSpec: steps: - name: echo image: ubuntu script: 'echo file exists' - name: task-should-be-skipped-1 when: 4 - input: \"USD(params.path)\" operator: notin values: [\"README.md\"] taskSpec: steps: - name: echo image: ubuntu script: exit 1 finally: - name: finally-task-should-be-executed when: 5 - input: \"USD(tasks.echo-file-exists.status)\" operator: in values: [\"Succeeded\"] - input: \"USD(tasks.status)\" operator: in values: [\"Succeeded\"] - input: \"USD(tasks.check-file.results.exists)\" operator: in values: [\"yes\"] - input: \"USD(params.path)\" operator: in values: [\"README.md\"] taskSpec: steps: - name: echo image: ubuntu script: 'echo finally done' params: - name: path value: README.md workspaces: - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Mi",
"apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: clone-cleanup-workspace 1 spec: workspaces: - name: git-source 2 tasks: - name: clone-app-repo 3 taskRef: name: git-clone-from-catalog params: - name: url value: https://github.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - name: cleanup 4 taskRef: 5 name: cleanup-workspace workspaces: 6 - name: source workspace: git-source - name: check-git-commit params: 7 - name: commit value: USD(tasks.clone-app-repo.results.commit) taskSpec: 8 params: - name: commit steps: - name: check-commit-initialized image: alpine script: | if [[ ! USD(params.commit) ]]; then exit 1 fi",
"apiVersion: tekton.dev/v1 1 kind: TaskRun 2 metadata: name: apply-manifests-taskrun 3 spec: 4 taskRunTemplate: serviceAccountName: pipeline taskRef: 5 kind: Task name: apply-manifests workspaces: 6 - name: source persistentVolumeClaim: claimName: source-pvc",
"apiVersion: tekton.dev/v1 1 kind: Pipeline 2 metadata: name: build-and-deploy 3 spec: 4 workspaces: 5 - name: shared-workspace params: 6 - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: \"pipelines-1.18\" - name: IMAGE type: string description: image to be built from the code tasks: 7 - name: fetch-repository taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: shared-workspace params: - name: URL value: USD(params.git-url) - name: SUBDIRECTORY value: \"\" - name: DELETE_EXISTING value: \"true\" - name: REVISION value: USD(params.git-revision) - name: build-image 8 taskRef: resolver: cluster params: - name: kind value: task - name: name value: buildah - name: namespace value: openshift-pipelines workspaces: - name: source workspace: shared-workspace params: - name: TLSVERIFY value: \"false\" - name: IMAGE value: USD(params.IMAGE) runAfter: - fetch-repository - name: apply-manifests 9 taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: 10 - build-image - name: update-deployment taskRef: name: update-deployment workspaces: - name: source workspace: shared-workspace params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests",
"apiVersion: tekton.dev/v1 1 kind: PipelineRun 2 metadata: name: build-deploy-api-pipelinerun 3 spec: pipelineRef: name: build-and-deploy 4 params: 5 - name: deployment-name value: vote-api - name: git-url value: https://github.com/openshift-pipelines/vote-api.git - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api workspaces: 6 - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: mypipelinerun spec: pipelineRef: name: mypipeline taskRunTemplate: podTemplate: securityContext: runAsNonRoot: true runAsUser: 1001",
"apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1 kind: TaskRun metadata: name: mytaskrun namespace: default spec: taskRef: name: mytask podTemplate: schedulerName: volcano securityContext: runAsNonRoot: true runAsUser: 1001",
"apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: 1 - name: shared-workspace params: tasks: 2 - name: build-image taskRef: resolver: cluster params: - name: kind value: task - name: name value: buildah - name: namespace value: openshift-pipelines workspaces: 3 - name: source 4 workspace: shared-workspace 5 params: - name: TLSVERIFY value: \"false\" - name: IMAGE value: USD(params.IMAGE) runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: 6 - name: source workspace: shared-workspace runAfter: - build-image",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: build-deploy-api-pipelinerun spec: pipelineRef: name: build-and-deploy params: workspaces: 1 - name: shared-workspace 2 volumeClaimTemplate: 3 spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: tekton.dev/v1 kind: StepAction metadata: name: apply-manifests-action spec: params: - name: working_dir description: The working directory where the source is located type: string 1 default: \"/workspace/source\" - name: manifest_dir description: The directory in source that contains yaml manifests default: \"k8s\" results: - name: output description: The output of the oc apply command image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest env: - name: MANIFEST_DIR value: USD(params.manifest_dir) workingDir: USD(params.working_dir) script: | #!/usr/bin/env bash oc apply -f \"USDMANIFEST_DIR\" | tee USD(results.output)",
"apiVersion: tekton.dev/v1 kind: Task metadata: name: apply-manifests-with-action spec: workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: \"k8s\" steps: - name: apply ref: name: apply-manifests-action params: - name: working_dir value: \"/workspace/source\" - name: manifest_dir value: USD(params.manifest_dir) - name: display_result script: 'echo USD(step.apply.results.output)'",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerBinding 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id)",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerTemplate 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.18 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: 5 - apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: build-deploy-USD(tt.params.git-repo-name)-USD(uid) spec: taskRunTemplate: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: Trigger 2 metadata: name: vote-trigger 3 spec: taskRunTemplate: serviceAccountName: pipeline 4 interceptors: - ref: name: \"github\" 5 params: 6 - name: \"secretRef\" value: secretName: github-secret secretKey: secretToken - name: \"eventTypes\" value: [\"push\"] bindings: - ref: vote-app 7 template: 8 ref: vote-app --- apiVersion: v1 kind: Secret 9 metadata: name: github-secret type: Opaque stringData: secretToken: \"1234567\"",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: EventListener 2 metadata: name: vote-app 3 spec: taskRunTemplate: serviceAccountName: pipeline 4 triggers: - triggerRef: vote-trigger 5"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/about_openshift_pipelines/understanding-openshift-pipelines |
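The chapter above defines the build-and-deploy pipeline together with a PipelineRun that supplies its parameters and workspace. As a complement, the commands below show one way to start and follow such a run from a terminal. This is a sketch under a few assumptions: the tkn CLI is installed, the resources live in a pipelines-tutorial namespace as suggested by the example image path, the manifest file name build-deploy-api-pipelinerun.yaml is hypothetical, and source-pvc is a placeholder persistent volume claim.

# Option 1: apply the PipelineRun manifest shown in the chapter.
oc create -f build-deploy-api-pipelinerun.yaml -n pipelines-tutorial

# Option 2: start the same pipeline imperatively with the tkn CLI.
tkn pipeline start build-and-deploy \
  -n pipelines-tutorial \
  -p deployment-name=vote-api \
  -p git-url=https://github.com/openshift-pipelines/vote-api.git \
  -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api \
  -w name=shared-workspace,claimName=source-pvc \
  --showlog

# Inspect recent runs and stream the logs of the run created from the manifest.
tkn pipelinerun list -n pipelines-tutorial
tkn pipelinerun logs build-deploy-api-pipelinerun -f -n pipelines-tutorial

Either approach creates one TaskRun per pipeline task, and the run status can also be followed from the Pipelines section of the OpenShift Container Platform web console.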
probe::vm.kmalloc_node | probe::vm.kmalloc_node Name probe::vm.kmalloc_node - Fires when kmalloc_node is requested. Synopsis Values ptr Pointer to the kmemory allocated caller_function Name of the caller function. call_site Address of the function calling this kmemory function. gfp_flag_name Type of kmemory to allocate (in string format) name Name of the probe point bytes_req Requested Bytes bytes_alloc Allocated Bytes gfp_flags Type of kmemory to allocate | [
"vm.kmalloc_node"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-vm-kmalloc-node |
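Because the reference entry above only lists the variables exposed by the probe, a short SystemTap one-liner may help show how they are typically consumed. This is an illustrative sketch: it must be run as root on a host with the SystemTap runtime and matching kernel packages installed, and the five-second duration is arbitrary.

# Print one line per kmalloc_node allocation for five seconds, using the
# documented probe variables, then exit.
stap -e 'probe vm.kmalloc_node {
  printf("%s: %s requested %d bytes (allocated %d, flags %s)\n",
         name, caller_function, bytes_req, bytes_alloc, gfp_flag_name)
}
probe timer.s(5) { exit() }'

The name variable identifies the probe point itself, so the output distinguishes kmalloc_node events from other vm.* probes if several are attached in the same script.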
Chapter 4. Installing and configuring the logs service | Chapter 4. Installing and configuring the logs service Red Hat OpenStack Platform (RHOSP) writes informational messages to specific log files; you can use these messages for troubleshooting and monitoring system events. The log collection agent Rsyslog collects logs on the client side and sends these logs to an instance of Rsyslog that is running on the server side. The server-side Rsyslog instance redirects log records to Elasticsearch for storage. Note You do not need to attach the individual log files to your support cases manually. The sosreport utility gathers the required logs automatically. 4.1. The centralized log system architecture and components Monitoring tools use a client-server model with the client deployed onto the Red Hat OpenStack Platform (RHOSP) overcloud nodes. The Rsyslog service provides client-side centralized logging (CL). All RHOSP services generate and update log files. These log files record actions, errors, warnings, and other events. In a distributed environment like OpenStack, collecting these logs in a central location simplifies debugging and administration. With centralized logging, there is one central place to view logs across your entire RHOSP environment. These logs come from the operating system, such as syslog and audit log files, infrastructure components, such as RabbitMQ and MariaDB, and OpenStack services such as Identity, Compute, and others. The centralized logging toolchain consists of the following components: Log Collection Agent (Rsyslog) Data Store (ElasticSearch) API/Presentation Layer (Grafana) Note Red Hat OpenStack Platform director does not deploy the server-side components for centralized logging. Red Hat does not support the server-side components, including the Elasticsearch database and Grafana. RHOSP 16.2 only supports rsyslog with Elasticsearch version 7. 4.2. Enabling centralized logging with Elasticsearch To enable centralized logging, you must specify the implementation of the OS::TripleO::Services::Rsyslog composable service. Note The Rsyslog service uses only Elasticsearch as a data store for centralized logging. Prerequisites Elasticsearch is installed on the server side. Procedure Add the file path of the logging environment file to the overcloud deployment command with any other environment files that are relevant to your environment and deploy, as shown in the following example: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. 4.3. Configuring logging features To configure logging features, modify the RsyslogElasticsearchSetting parameter in the logging-environment-rsyslog.yaml file. Procedure Copy the tripleo-heat-templates/environments/logging-environment-rsyslog.yaml file to your home directory. Create entries in the RsyslogElasticsearchSetting parameter to suit your environment. The following snippet is an example configuration of the RsyslogElasticsearchSetting parameter: Additional resources For more information about the configurable parameters, see Section 4.3.1, "Configurable logging parameters" . 4.3.1. Configurable logging parameters This table contains descriptions of logging parameters that you use to configure logging features in Red Hat OpenStack Platform (RHOSP). You can find these parameters in the tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml file. Table 4.1. 
Configurable logging parameters Parameter Description RsyslogElasticsearchSetting Configuration for rsyslog-elasticsearch plugin. For more information, see https://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html . RsyslogElasticsearchTlsCACert Contains the content of the CA cert for the CA that issued the Elasticsearch server cert. RsyslogElasticsearchTlsClientCert Contains the content of the client cert for doing client cert authorization against Elasticsearch. RsyslogElasticsearchTlsClientKey Contains the content of the private key corresponding to the cert RsyslogElasticsearchTlsClientCert . 4.4. Overriding the default path for a log file If you modify the default containers and the modification includes the path to the service log file, you must also modify the default log file path. Every composable service has a <service_name>LoggingSource parameter. For example, for the nova-compute service, the parameter is NovaComputeLoggingSource . Procedure To override the default path for the nova-compute service, add the path to the NovaComputeLoggingSource parameter in your configuration file: Note For each service, define the tag and file . Other values are derived by default. You can modify the format for a specific service. This passes directly to the Rsyslog configuration. The default format for the LoggingDefaultFormat parameter is /(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+) (?<pid>\d+) (?<priority>\S+) (?<message>.*)USD/ Use the following syntax: The following snippet is an example of a more complex transformation: 4.5. Modifying the format of a log record You can modify the format of the start of the log record for a specific service. This passes directly to the Rsyslog configuration. The default format for the Red Hat OpenStack Platform (RHOSP) log record is ('^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ [0-9]+)? (DEBUG|INFO|WARNING|ERROR) '). Procedure To add a different regular expression for parsing the start of log records, add startmsg.regex to the configuration: 4.6. Testing the connection between Rsyslog and Elasticsearch On the client side, you can verify communication between Rsyslog and Elasticsearch. Procedure Navigate to the Elasticsearch connection log file, /var/log/rsyslog/omelasticsearch.log in the Rsyslog container or /var/log/containers/rsyslog/omelasticsearch.log on the host. If this log file does not exist or if the log file exists but does not contain logs, there is no connection problem. If the log file is present and contains logs, Rsyslog has not connected successfully. Note To test the connection from the server side, view the Elasticsearch logs for connection issues. 4.7. Server-side logging If you have an Elasticsearch cluster running, you must configure the RsyslogElasticsearchSetting parameter in the logging-environment-rsyslog.yaml file to connect Rsyslog that is running on overcloud nodes. To configure the RsyslogElasticsearchSetting parameter, see https://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html 4.8. Tracebacks When you encounter an issue and you start troubleshooting, you can use a traceback log to diagnose the issue. In log files, tracebacks usually have several lines of information, all relating to the same issue. Rsyslog provides a regular expression to define how a log record starts. Each log record usually starts with a timestamp and the first line of the traceback is the only line that contains this information. 
Rsyslog bundles the indented records with the first line and sends them as one log record. For that behaviour configuration option startmsg.regex in <Service>LoggingSource is used. The following regular expression is the default value for all <service>LoggingSource parameters in director: When this default does not match log records of your added or modified LoggingSource , you must change startmsg.regex accordingly. 4.9. Location of log files for OpenStack services Each OpenStack component has a separate logging directory containing files specific to a running service. 4.9.1. Bare Metal Provisioning (ironic) log files Service Service name Log path OpenStack Ironic API openstack-ironic-api.service /var/log/containers/ironic/ironic-api.log OpenStack Ironic Conductor openstack-ironic-conductor.service /var/log/containers/ironic/ironic-conductor.log 4.9.2. Block Storage (cinder) log files Service Service name Log path Block Storage API openstack-cinder-api.service /var/log/containers/cinder-api.log Block Storage Backup openstack-cinder-backup.service /var/log/containers/cinder/backup.log Informational messages The cinder-manage command /var/log/containers/cinder/cinder-manage.log Block Storage Scheduler openstack-cinder-scheduler.service /var/log/containers/cinder/scheduler.log Block Storage Volume openstack-cinder-volume.service /var/log/containers/cinder/volume.log 4.9.3. Compute (nova) log files Service Service name Log path OpenStack Compute API service openstack-nova-api.service /var/log/containers/nova/nova-api.log OpenStack Compute certificate server openstack-nova-cert.service /var/log/containers/nova/nova-cert.log OpenStack Compute service openstack-nova-compute.service /var/log/containers/nova/nova-compute.log OpenStack Compute Conductor service openstack-nova-conductor.service /var/log/containers/nova/nova-conductor.log OpenStack Compute VNC console authentication server openstack-nova-consoleauth.service /var/log/containers/nova/nova-consoleauth.log Informational messages nova-manage command /var/log/containers/nova/nova-manage.log OpenStack Compute NoVNC Proxy service openstack-nova-novncproxy.service /var/log/containers/nova/nova-novncproxy.log OpenStack Compute Scheduler service openstack-nova-scheduler.service /var/log/containers/nova/nova-scheduler.log 4.9.4. Dashboard (horizon) log files Service Service name Log path Log of certain user interactions Dashboard interface /var/log/containers/horizon/horizon.log The Apache HTTP server uses several additional log files for the Dashboard web interface, which you can access by using a web browser or command-line client, for example, keystone and nova. The log files in the following table can be helpful in tracking the use of the Dashboard and diagnosing faults: Purpose Log path All processed HTTP requests /var/log/containers/httpd/horizon_access.log HTTP errors /var/log/containers/httpd/horizon_error.log Admin-role API requests /var/log/containers/httpd/keystone_wsgi_admin_access.log Admin-role API errors /var/log/containers/httpd/keystone_wsgi_admin_error.log Member-role API requests /var/log/containers/httpd/keystone_wsgi_main_access.log Member-role API errors /var/log/containers/httpd/keystone_wsgi_main_error.log Note There is also /var/log/containers/httpd/default_error.log , which stores errors reported by other web services that are running on the same host. 4.9.5. 
Identity Service (keystone) log files Service Service name Log Path OpenStack Identity Service openstack-keystone.service /var/log/containers/keystone/keystone.log 4.9.6. Image Service (glance) log files Service Service name Log path OpenStack Image Service API server openstack-glance-api.service /var/log/containers/glance/api.log OpenStack Image Service Registry server openstack-glance-registry.service /var/log/containers/glance/registry.log 4.9.7. Networking (neutron) log files Service Service name Log path OpenStack Neutron DHCP Agent neutron-dhcp-agent.service /var/log/containers/neutron/dhcp-agent.log OpenStack Networking Layer 3 Agent neutron-l3-agent.service /var/log/containers/neutron/l3-agent.log Metadata agent service neutron-metadata-agent.service /var/log/containers/neutron/metadata-agent.log Metadata namespace proxy n/a /var/log/containers/neutron/neutron-ns-metadata-proxy- UUID .log Open vSwitch agent neutron-openvswitch-agent.service /var/log/containers/neutron/openvswitch-agent.log OpenStack Networking service neutron-server.service /var/log/containers/neutron/server.log 4.9.8. Object Storage (swift) log files OpenStack Object Storage sends logs to the system logging facility only. Note By default, all Object Storage log files go to /var/log/containers/swift/swift.log , using the local0, local1, and local2 syslog facilities. The log messages of Object Storage are classified into two broad categories: those by REST API services and those by background daemons. The API service messages contain one line per API request, in a manner similar to popular HTTP servers; both the frontend (Proxy) and backend (Account, Container, Object) services post such messages. The daemon messages are less structured and typically contain human-readable information about daemons performing their periodic tasks. However, regardless of which part of Object Storage produces the message, the source identity is always at the beginning of the line. Here is an example of a proxy message: Here is an example of ad-hoc messages from background daemons: 4.9.9. Orchestration (heat) log files Service Service name Log path OpenStack Heat API Service openstack-heat-api.service /var/log/containers/heat/heat-api.log OpenStack Heat Engine Service openstack-heat-engine.service /var/log/containers/heat/heat-engine.log Orchestration service events n/a /var/log/containers/heat/heat-manage.log 4.9.10. Shared Filesystem Service (manila) log files Service Service name Log path OpenStack Manila API Server openstack-manila-api.service /var/log/containers/manila/api.log OpenStack Manila Scheduler openstack-manila-scheduler.service /var/log/containers/manila/scheduler.log OpenStack Manila Share Service openstack-manila-share.service /var/log/containers/manila/share.log Note Some information from the Manila Python library can also be logged in /var/log/containers/manila/manila-manage.log . 4.9.11. Telemetry (ceilometer) log files Service Service name Log path OpenStack ceilometer notification agent ceilometer_agent_notification /var/log/containers/ceilometer/agent-notification.log OpenStack ceilometer central agent ceilometer_agent_central /var/log/containers/ceilometer/central.log OpenStack ceilometer collection openstack-ceilometer-collector.service /var/log/containers/ceilometer/collector.log OpenStack ceilometer compute agent ceilometer_agent_compute /var/log/containers/ceilometer/compute.log 4.9.12. 
Log files for supporting services The following services are used by the core OpenStack components and have their own log directories and files. Service Service name Log path Message broker (RabbitMQ) rabbitmq-server.service /var/log/rabbitmq/rabbit@ short_hostname .log /var/log/rabbitmq/rabbit@ short_hostname -sasl.log (for Simple Authentication and Security Layer related log messages) Database server (MariaDB) mariadb.service /var/log/mariadb/mariadb.log Virtual network switch (Open vSwitch) openvswitch-nonetwork.service /var/log/openvswitch/ovsdb-server.log /var/log/openvswitch/ovs-vswitchd.log 4.9.13. aodh (alarming service) log files Service Container name Log path Alarming API aodh_api /var/log/containers/httpd/aodh-api/aodh_wsgi_access.log Alarm evaluator log aodh_evaluator /var/log/containers/aodh/aodh-evaluator.log Alarm listener aodh_listener /var/log/containers/aodh/aodh-listener.log Alarm notification aodh_notifier /var/log/containers/aodh/aodh-notifier.log 4.9.14. gnocchi (metric storage) log files Service Container name Log path Gnocchi API gnocchi_api /var/log/containers/httpd/gnocchi-api/gnocchi_wsgi_access.log Gnocchi metricd gnocchi_metricd /var/log/containers/gnocchi/gnocchi-metricd.log Gnocchi statsd gnocchi_statsd /var/log/containers/gnocchi/gnocchi-statsd.log | [
"openstack overcloud deploy <existing_overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment-rsyslog.yaml",
"parameter_defaults: RsyslogElasticsearchSetting: uid: \"elastic\" pwd: \"yourownpassword\" skipverifyhost: \"on\" allowunsignedcerts: \"on\" server: \"https://log-store-service-telemetry.apps.stfcloudops1.lab.upshift.rdu2.redhat.com\" serverport: 443",
"NovaComputeLoggingSource: tag: openstack.nova.compute file: /some/other/path/nova-compute.log",
"<service_name>LoggingSource: tag: <service_name>.tag path: <service_name>.path format: <service_name>.format",
"ServiceLoggingSource: tag: openstack.Service path: /var/log/containers/service/service.log format: multiline format_firstline: '/^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}.\\d{3} \\d+ \\S+ \\S+ \\[(req-\\S+ \\S+ \\S+ \\S+ \\S+ \\S+|-)\\]/' format1: '/^(?<Timestamp>\\S+ \\S+) (?<Pid>\\d+) (?<log_level>\\S+) (?<python_module>\\S+) (\\[(req-(?<request_id>\\S+) (?<user_id>\\S+) (?<tenant_id>\\S+) (?<domain_id>\\S+) (?<user_domain>\\S+) (?<project_domain>\\S+)|-)\\])? (?<Payload>.*)?USD/'",
"NovaComputeLoggingSource: tag: openstack.nova.compute file: /some/other/path/nova-compute.log startmsg.regex: \"^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ \\\\+[0-9]+)? [A-Z]+ \\\\([a-z]+\\\\)",
"startmsg.regex='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ [0-9]+)? (DEBUG|INFO|WARNING|ERROR) '",
"Apr 20 15:20:34 rhev-a24c-01 proxy-server: 127.0.0.1 127.0.0.1 20/Apr/2015/19/20/34 GET /v1/AUTH_zaitcev%3Fformat%3Djson%26marker%3Dtestcont HTTP/1.0 200 - python-swiftclient-2.1.0 AUTH_tk737d6... - 2 - txc454fa8ea4844d909820a-0055355182 - 0.0162 - - 1429557634.806570053 1429557634.822791100",
"Apr 27 17:08:15 rhev-a24c-02 object-auditor: Object audit (ZBF). Since Mon Apr 27 21:08:15 2015: Locally: 1 passed, 0 quarantined, 0 errors files/sec: 4.34 , bytes/sec: 0.00, Total time: 0.23, Auditing time: 0.00, Rate: 0.00 Apr 27 17:08:16 rhev-a24c-02 object-auditor: Object audit (ZBF) \"forever\" mode completed: 0.56s. Total quarantined: 0, Total errors: 0, Total files/sec: 14.31, Total bytes/sec: 0.00, Auditing time: 0.02, Rate: 0.04 Apr 27 17:08:16 rhev-a24c-02 account-replicator: Beginning replication run Apr 27 17:08:16 rhev-a24c-02 account-replicator: Replication run OVER Apr 27 17:08:16 rhev-a24c-02 account-replicator: Attempted to replicate 5 dbs in 0.12589 seconds (39.71876/s) Apr 27 17:08:16 rhev-a24c-02 account-replicator: Removed 0 dbs Apr 27 17:08:16 rhev-a24c-02 account-replicator: 10 successes, 0 failures"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/operational_measurements/installing-and-configuring-the-logs-service_assembly |
Preface | Preface You can use a host with a compatible graphics processing unit (GPU) to run virtual machines in Red Hat Virtualization that are suited for graphics-intensive tasks and for running software that cannot run without a GPU, such as CAD. You can assign a GPU to a virtual machine in one of the following ways: GPU passthrough : You can assign a host GPU to a single virtual machine, so the virtual machine, instead of the host, uses the GPU. Virtual GPU (vGPU) : You can divide a physical GPU device into one or more virtual devices, referred to as mediated devices . You can then assign these mediated devices to one or more virtual machines as virtual GPUs. These virtual machines share the performance of a single physical GPU. For some GPUs, only one mediated device can be assigned to a single guest. vGPU support is only available on selected NVIDIA GPUs. Example: A host has four GPUs. Each GPU can support up to 16 vGPUs, for a total of 64 vGPUs. Some possible vGPU assignments are: one virtual machine with 64 vGPUs 64 virtual machines, each with one vGPU 32 virtual machines, each with one vGPU; eight virtual machines, each with two vGPUs; 4 virtual machines, each with four vGPUs | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/setting_up_an_nvidia_gpu_for_a_virtual_machine_in_red_hat_virtualization/pr01 |
Installing on IBM Power | Installing on IBM Power OpenShift Container Platform 4.14 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_ibm_power/index |
Chapter 6. Installing the Migration Toolkit for Containers | Chapter 6. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4. After you install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 by using the Operator Lifecycle Manager, you manually install the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 6.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters using the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.17. MTC 1.8 only supports migrations from OpenShift Container Platform 4.14 and later. Table 6.1. MTC compatibility: Migrating from OpenShift Container Platform 3 to 4 Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.14 or later Stable MTC version MTC v.1.7. z MTC v.1.8. z Installation As described in this guide Install with OLM, release channel release-v1.8 Edge cases exist where network restrictions prevent OpenShift Container Platform 4 clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a OpenShift Container Platform 4 cluster in the cloud, the OpenShift Container Platform 4 cluster might have trouble connecting to the OpenShift Container Platform 3.11 cluster. In this case, it is possible to designate the OpenShift Container Platform 3.11 cluster as the control cluster and push workloads to the remote OpenShift Container Platform 4 cluster. 6.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. 
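A minimal sketch of that last prerequisite follows, assuming you have Red Hat Customer Portal credentials for registry.redhat.io; the secret name redhat-registry-pull-secret, the openshift namespace, and the credential placeholders are illustrative assumptions rather than values from this procedure, and distributing the credentials to the nodes is left to whatever tooling your cluster already uses.
# Illustrative only: create a docker-registry secret for registry.redhat.io.
$ oc create secret docker-registry redhat-registry-pull-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<customer_portal_username> \
    --docker-password=<customer_portal_password> \
    --docker-email=<email_address> \
    -n openshift
# The same registry login must also reach the container runtime on each node, for example by copying
# your local ~/.docker/config.json (or the node-specific equivalent) to every node with your usual tooling.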
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: $ podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : $ oc run test --image registry.redhat.io/ubi9 --command sleep infinity Create the Migration Toolkit for Containers Operator object: $ oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: $ oc create -f controller.yml Verify that the MTC pods are running: $ oc get pods -n openshift-migration 6.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 6.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.17, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 6.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. 
The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 6.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 6.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 6.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 6.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 6.4.2.1. NetworkPolicy configuration 6.4.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 6.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 6.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 6.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 6.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 6.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 6.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 6.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The following storage providers are supported: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 6.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 6.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint, which you need to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for MTC. Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 6.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC) . Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. 
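Before starting the procedure below, you can optionally confirm that the AWS CLI prerequisite is met and that your credentials resolve to the expected account; these checks make no assumptions beyond an installed and configured AWS CLI:
# Confirm the CLI is installed and which identity it authenticates as
aws --version
aws sts get-caller-identity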
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 6.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 6.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 6.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 6.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero') | [
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi9 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`",
"AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`",
"AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/installing-3-4 |
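After running the uninstall steps above, an optional way to confirm that no Migration Toolkit for Containers or Velero resources remain is to reuse the same name filters; this check is an illustration and not part of the documented procedure:
# Both commands print nothing when cleanup is complete
oc get crds -o name | grep -E 'migration.openshift.io|velero'
oc get clusterroles,clusterrolebindings -o name | grep -E 'migration|velero'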
Chapter 8. Performing post-upgrade tasks | Chapter 8. Performing post-upgrade tasks The following major tasks are recommended after an in-place upgrade to RHEL 8. Prerequisites You have upgraded the system following the steps described in Performing the upgrade from RHEL 7 to RHEL 8 and you have been able to log in to RHEL 8. You have verified the status of the in-place upgrade following the steps described in Verifying the post-upgrade status of the RHEL 8 system . Procedure After performing the upgrade, complete the following tasks: Remove any remaining Leapp packages from the exclude list in the /etc/dnf/dnf.conf configuration file, including the snactor package. During the in-place upgrade, Leapp packages that were installed with the Leapp utility are automatically added to the exclude list to prevent critical files from being removed or updated. After the in-place upgrade, you must remove these Leapp packages from the exclude list before they can be removed from the system. To manually remove packages from the exclude list, edit the /etc/dnf/dnf.conf configuration file and remove the desired Leapp packages from the exclude list. To remove all packages from the exclude list: Remove remaining RHEL 7 packages, including remaining Leapp packages. Determine old kernel versions: Remove weak modules from the old kernel. If you have multiple old kernels, repeat the following step for each kernel: Replace <version> with the kernel version determined in the step, for example: Note Ignore the following error message, which is generated if the kernel package has been previously removed: Remove the old kernel from the boot loader entry. If you have multiple old kernels, repeat this step for each kernel: Replace version with the kernel version determined in the step, for example: Locate remaining RHEL 7 packages: Remove remaining RHEL 7 packages, including old kernel packages, and the kernel-workaround package from your RHEL 8 system. To ensure that RPM dependencies are maintained, use YUM or DNF when performing these actions. Review the transaction before accepting to ensure no packages are unintentionally removed. For example: Remove remaining Leapp dependency packages: Remove any remaining empty directories: Optional: Remove all remaining upgrade-related data from the system: Important Removing this data might limit Red Hat Support's ability to investigate and troubleshoot post-upgrade problems. Disable YUM repositories whose packages cannot be installed or used on RHEL 8. Repositories managed by RHSM are handled automatically. To disable these repositories: Replace <repository_id> with the repository ID. Replace the old rescue kernel and initial RAM disk with the current kernel and disk: Remove the existing rescue kernel and initial RAM disk: Reinstall the rescue kernel and related initial RAM disk: Note If your system's kernel package has a different name, such as on real-time systems, replace kernel-core with the correct package name. If your system is on the IBM Z architecture, update the zipl boot loader: Optional: Review, remediate, and then remove the rpmnew , rpmsave , and leappsave files. Note that rpmsave and leappsave are equivalent and can be handled similarly. For more information, see What are rpmnew & rpmsave files? Re-evaluate and re-apply your security policies. Especially, change the SELinux mode to enforcing. For details, see Applying security policies . 
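In addition to the verification steps that follow, you can quickly confirm that the exclude list is clear and that no RHEL 7 packages remain; both checks reuse commands already shown in this procedure:
# An empty result from each command indicates a clean post-upgrade state
grep -i '^exclude' /etc/dnf/dnf.conf
rpm -qa | grep -e '\.el7' | grep -vE '^(gpg-pubkey|libmodulemd|katello-ca-consumer)'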
Verification Verify that the old kernels have been removed from the boot loader entry: Verify that the previously removed rescue kernel and rescue initial RAM disk files have been created for the current kernel: Verify the rescue boot entry refers to the existing rescue files. See the grubby output: | [
"yum config-manager --save --setopt exclude=''",
"cd /lib/modules && ls -d *.el7*",
"[ -x /usr/sbin/weak-modules ] && /usr/sbin/weak-modules --remove-kernel <version>",
"[ -x /usr/sbin/weak-modules ] && /usr/sbin/weak-modules --remove-kernel 3.10.0-1160.25.1.el7.x86_64",
"/usr/sbin/weak-modules: line 1081: cd: /lib/modules/ <version> /weak-updates: No such file or directory",
"/bin/kernel-install remove <version> /lib/modules/ <version> /vmlinuz",
"/bin/kernel-install remove 3.10.0-1160.25.1.el7.x86_64 /lib/modules/3.10.0-1160.25.1.el7.x86_64/vmlinuz",
"rpm -qa | grep -e '\\.el[67]' | grep -vE '^(gpg-pubkey|libmodulemd|katello-ca-consumer)' | sort",
"yum remove kernel-workaround USD(rpm -qa | grep \\.el7 | grep -vE 'gpg-pubkey|libmodulemd|katello-ca-consumer')",
"yum remove leapp-deps-el8 leapp-repository-deps-el8",
"rm -r /lib/modules/*el7*",
"rm -rf /var/log/leapp /root/tmp_leapp_py3 /var/lib/leapp",
"yum config-manager --set-disabled <repository_id>",
"rm /boot/vmlinuz-*rescue* /boot/initramfs-*rescue*",
"/usr/lib/kernel/install.d/51-dracut-rescue.install add \"USD(uname -r)\" /boot \"/boot/vmlinuz-USD(uname -r)\"",
"zipl",
"grubby --info=ALL | grep \"\\.el7\" || echo \"Old kernels are not present in the boot loader.\"",
"ls /boot/vmlinuz-*rescue* /boot/initramfs-*rescue* lsinitrd /boot/initramfs-*rescue*.img | grep -qm1 \"USD(uname -r)/kernel/\" && echo \"OK\" || echo \"FAIL\"",
"grubby --info USD(ls /boot/vmlinuz-*rescue*)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/performing-post-upgrade-tasks-rhel-7-to-rhel-8_upgrading-from-rhel-7-to-rhel-8 |
6.8. Modifying Resource Parameters | 6.8. Modifying Resource Parameters To modify the parameters of a configured resource, use the following command. The following sequence of commands shows the initial values of the configured parameters for resource VirtualIP , the command to change the value of the ip parameter, and the values after the update command. | [
"pcs resource update resource_id [ resource_options ]",
"pcs resource show VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.168.0.120 cidr_netmask=24 Operations: monitor interval=30s pcs resource update VirtualIP ip=192.169.0.120 pcs resource show VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.169.0.120 cidr_netmask=24 Operations: monitor interval=30s"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/resourcemodify |
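The same update syntax also accepts operation options; for example, a hedged sketch of changing the monitor interval for the VirtualIP resource (the 60-second value is illustrative only):
# Update an operation option, then confirm the change
pcs resource update VirtualIP op monitor interval=60s
pcs resource show VirtualIP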
19.2. Types | 19.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with Squid. Different types allow you to configure flexible access: httpd_squid_script_exec_t This type is used for utilities such as cachemgr.cgi , which provides a variety of statistics about Squid and its configuration. squid_cache_t Use this type for data that is cached by Squid, as defined by the cache_dir directive in /etc/squid/squid.conf . By default, files created in or copied into the /var/cache/squid/ and /var/spool/squid/ directories are labeled with the squid_cache_t type. Files for the squidGuard URL redirector plug-in for squid created in or copied to the /var/squidGuard/ directory are also labeled with the squid_cache_t type. Squid is only able to use files and directories that are labeled with this type for its cached data. squid_conf_t This type is used for the directories and files that Squid uses for its configuration. Existing files, or those created in or copied to the /etc/squid/ and /usr/share/squid/ directories are labeled with this type, including error messages and icons. squid_exec_t This type is used for the squid binary, /usr/sbin/squid . squid_log_t This type is used for logs. Existing files, or those created in or copied to /var/log/squid/ or /var/log/squidGuard/ must be labeled with this type. squid_initrc_exec_t This type is used for the initialization file required to start squid which is located at /etc/rc.d/init.d/squid . squid_var_run_t This type is used by files in the /var/run/ directory, especially the process id (PID) named /var/run/squid.pid which is created by Squid when it runs. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-squid_caching_proxy-types |
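To inspect these labels on a running system, you can list the security contexts of the default Squid locations and processes described above; these commands are an optional illustration:
# Show SELinux types on the Squid configuration, cache, and log directories
ls -dZ /etc/squid /var/cache/squid /var/log/squid
# Show the SELinux domain of running Squid processes
ps -eZ | grep squid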
Appendix C. Configuring Ansible inventory location | Appendix C. Configuring Ansible inventory location As an option, you can configure inventory location files for the ceph-ansible staging and production environments. Prerequisites An Ansible administration node. Root-level access to the Ansible administration node. The ceph-ansible package is installed on the node. Procedure Navigate to the /usr/share/ceph-ansible directory: Create subdirectories for staging and production: Edit the ansible.cfg file and add the following lines: Create an inventory 'hosts' file for each environment: Open and edit each hosts file and add the Ceph Monitor nodes under the [mons] section: Example Note By default, playbooks run in the staging environment. To run the playbook in the production environment: Additional Resources For more information about installing the ceph-ansible package, see Installing a Red Hat Storage Cluster . | [
"cd /usr/share/ceph-ansible",
"[ansible@admin ceph-ansible]USD mkdir -p inventory/staging inventory/production",
"[defaults] inventory = ./inventory/staging # Assign a default inventory directory",
"[ansible@admin ceph-ansible]USD touch inventory/staging/hosts [ansible@admin ceph-ansible]USD touch inventory/production/hosts",
"[mons] MONITOR_NODE_NAME_1 MONITOR_NODE_NAME_1 MONITOR_NODE_NAME_1",
"[mons] mon-stage-node1 mon-stage-node2 mon-stage-node3",
"[ansible@admin ceph-ansible]USD ansible-playbook -i inventory/production playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/configuring-ansible-inventory-location-install |
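As an optional check after creating the inventories above, you can confirm that Ansible resolves the [mons] group in each environment before running any playbook; this ad hoc ping is an illustration, not part of the documented procedure:
# Run from the /usr/share/ceph-ansible directory
ansible -i inventory/staging/hosts mons -m ping
ansible -i inventory/production/hosts mons -m ping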
Preface | Preface RHTAP empowers teams with its ready-to-use software templates and build pipeline configurations, designed to seamlessly integrate security practices into your development processes. These tools not only alleviate the burden of security considerations for developers but also enhance focus on innovation. Cluster administrators play a pivotal role in tailoring these resources to fit the unique requirements of their on-prem environments, including: Customizing software templates to meet specific organizational needs Modifying build pipeline configurations to align with project goals Configuring GitLab Webhooks for automated pipeline triggers Such customizations streamline development workflows, addressing common concerns around pipelines, vulnerabilities, and policy compliance, thereby letting developers prioritize coding. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/customizing_red_hat_trusted_application_pipeline/pr01 |
18.2. Remote Management with SSH | 18.2. Remote Management with SSH The ssh package provides an encrypted network protocol that can securely send management functions to remote virtualization servers. The method described below uses the libvirt management connection, securely tunneled over an SSH connection, to manage the remote machines. All the authentication is done using SSH public key cryptography and passwords or passphrases gathered by your local SSH agent. In addition, the VNC console for each guest is tunneled over SSH . When using SSH for remotely managing your virtual machines, be aware of the following problems: You require root login access to the remote machine for managing virtual machines. The initial connection setup process may be slow. There is no standard or trivial way to revoke a user's key on all hosts or guests. SSH does not scale well with larger numbers of remote machines. Note Red Hat Virtualization enables remote management of large numbers of virtual machines. For further details, see the Red Hat Virtualization documentation . The following packages are required for SSH access: openssh openssh-askpass openssh-clients openssh-server Configuring Password-less or Password-managed SSH Access for virt-manager The following instructions assume you are starting from scratch and do not already have SSH keys set up. If you have SSH keys set up and copied to the other systems, you can skip this procedure. Important SSH keys are user-dependent and may only be used by their owners. A key's owner is the user who generated it. Keys may not be shared across different users. virt-manager must be run by the user who owns the keys to connect to the remote host. That means, if the remote systems are managed by a non-root user, virt-manager must be run in unprivileged mode. If the remote systems are managed by the local root user, then the SSH keys must be owned and created by root. You cannot manage the local host as an unprivileged user with virt-manager . Optional: Changing user Change user, if required. This example uses the local root user for remotely managing the other hosts and the local host. Generating the SSH key pair Generate a public key pair on the machine where virt-manager is used. This example uses the default key location, in the ~/.ssh/ directory. Copying the keys to the remote hosts Remote login without a password, or with a pass-phrase, requires an SSH key to be distributed to the systems being managed. Use the ssh-copy-id command to copy the key to the root user at the system address provided (in the example, [email protected] ). Afterwards, try logging into the machine and check the .ssh/authorized_keys file to make sure unexpected keys have not been added: Repeat for other systems, as required. Optional: Add the passphrase to the ssh-agent Add the pass-phrase for the SSH key to the ssh-agent , if required. On the local host, use the following command to add the pass-phrase (if there was one) to enable password-less login. This command will fail to run if the ssh-agent is not running. To avoid errors or conflicts, make sure that your SSH parameters are set correctly. See the Red Hat Enterprise System Administration Guide for more information. The libvirt daemon ( libvirtd ) The libvirt daemon provides an interface for managing virtual machines. You must have the libvirtd daemon installed and running on every remote host that you intend to manage this way.
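Once the keys are distributed and libvirtd is running on the remote host, you can confirm the tunneled connection from the command line before opening virt-manager; the host name below follows the example address used above:
# List all guests on the remote host over an SSH-tunneled libvirt connection
virsh -c qemu+ssh://[email protected]/system list --all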
After libvirtd and SSH are configured, you should be able to remotely access and manage your virtual machines. You should also be able to access your guests with VNC at this point. Accessing Remote Hosts with virt-manager Remote hosts can be managed with the virt-manager GUI tool. SSH keys must belong to the user executing virt-manager for password-less login to work. Start virt-manager . Open the File ⇒ Add Connection menu. Figure 18.1. Add connection menu Use the drop down menu to select hypervisor type, and click the Connect to remote host check box to open the Connection Method (in this case Remote tunnel over SSH), enter the User name and Hostname , then click Connect . | [
"su -",
"ssh-keygen -t rsa",
"ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected] [email protected]'s password:",
"ssh [email protected]",
"ssh-add ~/.ssh/id_rsa",
"ssh root@ somehost # systemctl enable libvirtd.service # systemctl start libvirtd.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Remote_management_of_guests-Remote_management_with_SSH |
Chapter 8. Planning your overcloud | Chapter 8. Planning your overcloud The following section contains some guidelines for planning various aspects of your Red Hat OpenStack Platform (RHOSP) environment. This includes defining node roles, planning your network topology, and storage. Important Do not rename your overcloud nodes after they have been deployed. Renaming a node after deployment creates issues with instance management. 8.1. Node roles Director includes the following default node types to build your overcloud: Controller Provides key services for controlling your environment. This includes the dashboard (horizon), authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and high availability services. A Red Hat OpenStack Platform (RHOSP) environment requires three Controller nodes for a highly available production-level environment. Note Use environments with one Controller node only for testing purposes, not for production. Environments with two Controller nodes or more than three Controller nodes are not supported. Compute A physical server that acts as a hypervisor and contains the processing capabilities required to run virtual machines in the environment. A basic RHOSP environment requires at least one Compute node. Ceph Storage A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This deployment role is optional. Swift Storage A host that provides external object storage to the OpenStack Object Storage (swift) service. This deployment role is optional. The following table contains some examples of different overclouds and defines the node types for each scenario. Table 8.1. Node Deployment Roles for Scenarios Controller Compute Ceph Storage Swift Storage Total Small overcloud 3 1 - - 4 Medium overcloud 3 3 - - 6 Medium overcloud with additional object storage 3 3 - 3 9 Medium overcloud with Ceph Storage cluster 3 3 3 - 9 In addition, consider whether to split individual services into custom roles. For more information about the composable roles architecture, see "Composable Services and Custom Roles" in the Director installation and usage guide. Table 8.2. Node Deployment Roles for Proof of Concept Deployment Undercloud Controller Compute Ceph Storage Total Proof of concept 1 1 1 1 4 Warning The Red Hat OpenStack Platform maintains an operational Ceph Storage cluster during day-2 operations. Therefore, some day-2 operations, such as upgrades or minor updates of the Ceph Storage cluster, are not possible in deployments with fewer than three MONs or three storage nodes. If you use a single Controller node or a single Ceph Storage node, day-2 operations will fail. 8.2. Overcloud networks It is important to plan the networking topology and subnets in your environment so that you can map roles and services to communicate with each other correctly. Red Hat OpenStack Platform (RHOSP) uses the Openstack Networking (neutron) service, which operates autonomously and manages software-based networks, static and floating IP addresses, and DHCP. By default, director configures nodes to use the Provisioning / Control Plane for connectivity. However, it is possible to isolate network traffic into a series of composable networks, that you can customize and assign services. In a typical RHOSP installation, the number of network types often exceeds the number of physical network links. 
To connect all the networks to the proper hosts, the overcloud uses VLAN tagging to deliver more than one network on each interface. Most of the networks are isolated subnets but some networks require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity. If you use VLANs to isolate your network traffic types, you must use a switch that supports 802.1Q standards to provide tagged VLANs. Note It is recommended that you deploy a project network (tunneled with GRE or VXLAN) even if you intend to use a neutron VLAN mode with tunneling disabled at deployment time. This requires minor customization at deployment time and leaves the option available to use tunnel networks as utility networks or virtualization networks in the future. You still create Tenant networks using VLANs, but you can also create VXLAN tunnels for special-use networks without consuming tenant VLANs. It is possible to add VXLAN capability to a deployment with a Tenant VLAN, but it is not possible to add a Tenant VLAN to an existing overcloud without causing disruption. Director also includes a set of templates that you can use to configure NICs with isolated composable networks. The following configurations are the default configurations: Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types. Bonded NIC configuration - One NIC for the Provisioning network on the native VLAN and two NICs in a bond for tagged VLANs for the different overcloud network types. Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type. You can also create your own templates to map a specific NIC configuration. The following details are also important when you consider your network configuration: During the overcloud creation, you refer to NICs using a single name across all overcloud machines. Ideally, you should use the same NIC on each overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services. Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC and any other NICs on the system. Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives. All overcloud bare metal systems require a supported power management interface, such as an Intelligent Platform Management Interface (IPMI), so that director can control the power management of each node. Make a note of the following details for each overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This information is useful later when you configure the overcloud nodes. If an instance must be accessible from the external internet, you can allocate a floating IP address from a public network and associate the floating IP with an instance. The instance retains its private IP but network traffic uses NAT to traverse through to the floating IP address. Note that a floating IP address can be assigned only to a single instance rather than multiple private IP addresses. However, the floating IP address is reserved for use only by a single tenant, which means that the tenant can associate or disassociate the floating IP address with a particular instance as required. 
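As an illustration of the floating IP behaviour described above, the following commands allocate a floating IP address from an external network and associate it with an instance; the network name, instance name, and address are placeholders:
# Allocate a floating IP from the external network named "public"
openstack floating ip create public
# Associate the allocated address with an instance
openstack server add floating ip my-instance 203.0.113.17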
This configuration exposes your infrastructure to the external internet and you must follow suitable security practices. To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges. Red Hat recommends using DNS hostname resolution so that your overcloud nodes can connect to external services, such as the Red Hat Content Delivery Network and network time servers. Red Hat recommends that the Provisioning interface, External interface, and any floating IP interfaces be left at the default MTU of 1500. Connectivity problems are likely to occur otherwise. This is because routers typically cannot forward jumbo frames across Layer 3 boundaries. Note You can virtualize the overcloud control plane if you are using Red Hat Virtualization (RHV). For more information, see Creating virtualized control planes . 8.3. Overcloud storage You can use Red Hat Ceph Storage nodes as the back end storage for your overcloud environment. You can configure your overcloud to use the Ceph nodes for the following types of storage: Images The Image service (glance) manages the images that are used for creating virtual machine instances. Images are immutable binary blobs. You can use the Image service to store images in a Ceph Block Device. For information about supported image formats, see The Image service (glance) in Creating and Managing Images . Volumes The Block Storage service (cinder) manages persistent storage volumes for instances. The Block Storage service volumes are block devices. You can use a volume to boot an instance, and you can attach volumes to running instances. You can use the Block Storage service to boot a virtual machine using a copy-on-write clone of an image. Objects The Ceph Object Gateway (RGW) provides the default overcloud object storage on the Ceph cluster when your overcloud storage back end is Red Hat Ceph Storage. If your overcloud does not have Red Hat Ceph Storage, then the overcloud uses the Object Storage service (swift) to provide object storage. You can dedicate overcloud nodes to the Object Storage service. This is useful in situations where you need to scale or replace Controller nodes in your overcloud environment but need to retain object storage outside of a high availability cluster. File Systems The Shared File Systems service (manila) manages shared file systems. You can use the Shared File Systems service to manage shares backed by a CephFS file system with data on the Ceph Storage nodes. Instance disks When you launch an instance, the instance disk is stored as a file in the instance directory of the hypervisor. The default file location is /var/lib/nova/instances . For more information about Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . 8.3.1. Configuration considerations for overcloud storage nodes Instance security and performance Using LVM on an instance that uses a back end Block Storage volume causes issues with performance, volume visibility and availability, and data corruption. Use an LVM filter to mitigate visibility, availability, and data corruption issues. For more information, see Enabling LVM2 filtering on overcloud nodes in the Storage Guide , and the Red Hat Knowledgebase solution Using LVM on a cinder volume exposes the data to the compute host . 
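A minimal sketch of enabling LVM2 filtering through an environment file, based on the parameter described in the Storage Guide reference above; treat the file name as an assumption and verify the parameter against your RHOSP version:
# Create an environment file that enables the LVM2 filter on overcloud nodes
cat > lvm2-filtering.yaml <<'EOF'
parameter_defaults:
  LVMFilterEnabled: true
EOF
# Include the file in the deployment command, for example:
# openstack overcloud deploy --templates -e lvm2-filtering.yaml ...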
Local disk partition sizes Consider the storage and retention requirements for your storage nodes, to determine if the following default disk partition sizes meet your requirements: Partition Default size / 8GB /tmp 1GB /var/log 10GB /var/log/audit 2GB /home 1GB /var Allocated the remaining size of the disk after all other partitions are allocated. To change the allocated disk size for a partition, update the role_growvols_args extra Ansible variable in the Ansible_playbooks definition in your overcloud-baremetal-deploy.yaml node definition file. For more information, see Provisioning bare metal nodes for the overcloud . If your partitions continue to fill up after you have optimized the configuration of your partition sizes, then perform one of the following tasks: Manually delete files from the affected partitions. Add a new physical disk and add it to the LVM volume group. For more information, see Configuring and managing logical volumes . Note Adding a new disk requires a support exception. Contact the Red Hat Customer Experience and Engagement team to discuss a support exception, if applicable, or other options. 8.4. Overcloud security Your OpenStack Platform implementation is only as secure as your environment. Follow good security principles in your networking environment to ensure that you control network access properly: Use network segmentation to mitigate network movement and isolate sensitive data. A flat network is much less secure. Restrict services access and ports to a minimum. Enforce proper firewall rules and password usage. Ensure that SELinux is enabled. For more information about securing your system, see the following Red Hat guides: Security Hardening for Red Hat Enterprise Linux 9 Using SELinux for Red Hat Enterprise Linux 9 8.5. Overcloud high availability To deploy a highly-available overcloud, director configures multiple Controller, Compute and Storage nodes to work together as a single cluster. In case of node failure, an automated fencing and re-spawning process is triggered based on the type of node that failed. For more information about overcloud high availability architecture and services, see High Availability Deployment and Usage . Note Deploying a highly available overcloud without STONITH is not supported. You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . You can also configure high availability for Compute instances with director (Instance HA). This high availability mechanism automates evacuation and re-spawning of instances on Compute nodes in case of node failure. The requirements for Instance HA are the same as the general overcloud requirements, but you must perform a few additional steps to prepare your environment for the deployment. For more information about Instance HA and installation instructions, see the High Availability for Compute Instances guide. 8.6. Controller node requirements Controller nodes host the core services in a Red Hat OpenStack Platform environment, such as the Dashboard (horizon), the back-end database server, the Identity service (keystone) authentication, and high availability services. Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Memory The minimum amount of memory is 32 GB. 
However, the amount of recommended memory depends on the number of vCPUs, which is based on the number of CPU cores multiplied by the hyper-threading value. Use the following calculations to determine your RAM requirements: Controller RAM minimum calculation: Use 1.5 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM. Controller RAM recommended calculation: Use 3 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM. For more information about measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal. Disk Storage and layout A minimum amount of 50 GB storage is required if the Object Storage service (swift) is not running on the Controller nodes. However, the Telemetry and Object Storage services are both installed on the Controllers, with both configured to use the root disk. These defaults are suitable for deploying small overclouds built on commodity hardware. These environments are typical of proof-of-concept and test environments. You can use these defaults to deploy overclouds with minimal planning, but they offer little in terms of workload capacity and performance. In an enterprise environment, however, the defaults could cause a significant bottleneck because Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts the performance of all other Controller services. In this type of environment, you must plan your overcloud and configure it accordingly. Network Interface Cards A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Power management Each Controller node requires a supported power management interface, such as an Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard. Virtualization support Red Hat supports virtualized Controller nodes only on Red Hat Virtualization platforms. For more information, see Creating virtualized control planes . 8.7. Compute node requirements Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes require bare metal systems that support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances that they host. Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended that this processor has a minimum of 4 cores. Memory A minimum of 6 GB of RAM for the host operating system, plus additional memory to accommodate the following considerations: Add additional memory that you intend to make available to virtual machine instances. Add additional memory to run special features or additional resources on the host, such as additional kernel modules, virtual switches, monitoring solutions, and other additional background tasks. If you intend to use non-uniform memory access (NUMA), Red Hat recommends 8GB per CPU socket node or 16 GB per socket node if you have more than 256 GB of physical RAM. Configure at least 4 GB of swap space. Disk space A minimum of 50 GB of available disk space. Network Interface Cards A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment.
Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Power management Each Compute node requires a supported power management interface, such as an Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard. 8.8. Red Hat Ceph Storage node requirements There are additional node requirements using director to create a Ceph Storage cluster: Hardware requirements including processor, memory, and network interface card selection and disk layout are available in the Red Hat Ceph Storage Hardware Guide . Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server. Each Ceph Storage node must have at least two disks. RHOSP director uses cephadm to deploy the Ceph Storage cluster. The cephadm functionality does not support installing Ceph OSD on the root disk of the node. 8.9. Ceph Storage nodes and RHEL compatibility RHOSP 17.0 is supported on RHEL 9.0. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. Before upgrading, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations . 8.10. Object Storage node requirements Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks on each node. Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Memory Memory requirements depend on the amount of storage space. Use at minimum 1 GB of memory for each 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB for each 1 TB of hard disk space, especially for workloads with files smaller than 100GB. Disk space Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is approximately 1 per cent. For example, for every 100TB of hard drive capacity, provide 1TB of SSD capacity for account and container data. However, this depends on the type of stored data. If you want to store mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space. Disk layout The recommended node configuration requires a disk layout similar to the following example: /dev/sda - The root disk. Director copies the main overcloud image to the disk. /dev/sdb - Used for account data. /dev/sdc - Used for container data. /dev/sdd and onward - The object server disks. Use as many disks as necessary for your storage requirements. Network Interface Cards A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Power management Each Controller node requires a supported power management interface, such as an Intelligent Platform Management Interface (IPMI) functionality, on the server motherboard. 8.11. Overcloud repositories You run Red Hat OpenStack Platform 17.0 on Red Hat Enterprise Linux 9.0. As a result, you must lock the content from these repositories to the respective Red Hat Enterprise Linux version. Warning Any repositories except the ones specified here are not supported. 
Unless recommended, do not enable any other products or repositories except the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Note Satellite repositories are not listed because RHOSP 17.0 does not support Satellite. Satellite support is planned for a future release. Only Red Hat CDN is supported as a package repository and container registry. NFV repositories are not listed because RHOSP 17.0 does not support NFV. Controller node repositories The following table lists core repositories for Controller nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform 17 for RHEL 9 x86_64 (RPMs) openstack-17-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 5 for RHEL 9 x86_64 (RPMs) rhceph-5-tools-for-rhel-9-x86_64-rpms Tools for Red Hat Ceph Storage 5 for Red Hat Enterprise Linux 9. Compute and ComputeHCI node repositories The following table lists core repositories for Compute and ComputeHCI nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform 17 for RHEL 9 x86_64 (RPMs) openstack-17-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 5 for RHEL 9 x86_64 (RPMs) rhceph-5-tools-for-rhel-9-x86_64-rpms Tools for Red Hat Ceph Storage 5 for Red Hat Enterprise Linux 9. Real Time Compute repositories The following table lists repositories for Real Time Compute (RTC) functionality. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - Real Time (RPMs) rhel-9-for-x86_64-rt-rpms Repository for Real Time KVM (RT-KVM). Contains packages to enable the real time kernel. Enable this repository for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU to access this repository. Ceph Storage node repositories The following table lists Ceph Storage related repositories for the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) rhel-9-for-x86_64-baseos-rpms Base operating system repository for x86_64 systems. 
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat OpenStack Platform 17 Director Deployment Tools for RHEL 9 x86_64 (RPMs) openstack-17-deployment-tools-for-rhel-9-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with standalone Ceph Storage subscriptions. If you use a combined OpenStack Platform and Ceph Storage subscription, use the openstack-17-for-rhel-9-x86_64-rpms repository. Red Hat OpenStack Platform 17 for RHEL 9 x86_64 (RPMs) openstack-17-for-rhel-9-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with combined OpenStack Platform and Ceph Storage subscriptions. If you use a standalone Ceph Storage subscription, use the openstack-17-deployment-tools-for-rhel-9-x86_64-rpms repository. Red Hat Ceph Storage Tools 5 for RHEL 9 x86_64 (RPMs) rhceph-5-tools-for-rhel-9-x86_64-rpms Provides tools for nodes to communicate with the Ceph Storage cluster. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. If you are using OVS on Ceph Storage nodes, add this repository to the network interface configuration (NIC) templates. 8.12. Node provisioning and configuration You provision the overcloud nodes for your Red Hat OpenStack Platform (RHOSP) environment by using either the OpenStack Bare Metal (ironic) service, or an external tool. When your nodes are provisioned, you configure them by using director. Provisioning with the OpenStack Bare Metal (ironic) service Provisioning overcloud nodes by using the Bare Metal service is the standard provisioning method. For more information, see Provisioning bare metal overcloud nodes . Provisioning with an external tool You can use an external tool, such as Red Hat Satellite, to provision overcloud nodes. This is useful if you want to create an overcloud without power management control, use networks that have DHCP/PXE boot restrictions, or if you want to use nodes that have a custom partitioning layout that does not rely on the overcloud-hardened-uefi-full.qcow2 image. This provisioning method does not use the OpenStack Bare Metal service (ironic) for managing nodes. For more information, see Configuring a basic overcloud with pre-provisioned nodes . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_planning-your-overcloud |
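As a practical illustration of the role_growvols_args variable mentioned in the local disk partition sizes section above, the following is a minimal sketch of how it might appear in an overcloud-baremetal-deploy.yaml node definition. The playbook path, the Controller role values, and the /var=100% notation are assumptions drawn from common director provisioning examples, so verify them against the bare metal provisioning documentation for your release.

- name: Controller
  count: 3
  ansible_playbooks:
    # Assumed playbook name; confirm the path shipped with your RHOSP release.
    - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-growvols.yaml
      extra_vars:
        role_growvols_args:
          default:
            /=8GB
            /tmp=1GB
            /var/log=10GB
            /var/log/audit=2GB
            /home=1GB
            /var=100%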
4.217. papi 4.217.1. RHBA-2011:1755 - papi bug fix and enhancement update An updated papi package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. PAPI (Performance Application Programming Interface) is a software library that provides access to the processor's performance-monitoring hardware. The papi package has been upgraded to upstream version 4.1.3, which provides a number of bug fixes and enhancements over the previous version. (BZ# 705893 ) All PAPI users are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/papi
Chapter 9. AWS SQS Sink | Chapter 9. AWS SQS Sink Send message to an AWS SQS Queue 9.1. Configuration Options The following table summarizes the configuration options available for the aws-sqs-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string queueNameOrArn * Queue Name The SQS Queue name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateQueue Autocreate Queue Setting the autocreation of the SQS queue. boolean false Note Fields marked with an asterisk (*) are mandatory. 9.2. Dependencies At runtime, the aws-sqs-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-sqs camel:core camel:kamelet 9.3. Usage This section describes how you can use the aws-sqs-sink . 9.3.1. Knative Sink You can use the aws-sqs-sink Kamelet as a Knative sink by binding it to a Knative object. aws-sqs-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 9.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 9.3.1.2. Procedure for using the cluster CLI Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-sink-binding.yaml 9.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 9.3.2. Kafka Sink You can use the aws-sqs-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-sqs-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 9.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 9.3.2.2. Procedure for using the cluster CLI Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-sink-binding.yaml 9.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 
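The autoCreateQueue option listed in the configuration table can be combined with either binding shown above when the target queue may not exist yet. The following sketch simply adds it to the Kafka-sourced binding; the credential and queue values remain the placeholders used in the earlier examples.

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      # Create the queue automatically if it does not already exist (defaults to false).
      autoCreateQueue: true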
9.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-sqs-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-sqs-sink-binding.yaml",
"kamel bind channel:mychannel aws-sqs-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-sqs-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/aws-sqs-sink |
Chapter 11. Configuring OIDC for Red Hat Quay | Chapter 11. Configuring OIDC for Red Hat Quay Configuring OpenID Connect (OIDC) for Red Hat Quay can provide several benefits to your Red Hat Quay deployment. For example, OIDC allows users to authenticate to Red Hat Quay using their existing credentials from an OIDC provider, such as Red Hat Single Sign-On , Google, Github, Microsoft, or others. Other benefits of OIDC include centralized user management, enhanced security, and single sign-on (SSO). Overall, OIDC configuration can simplify user authentication and management, enhance security, and provide a seamless user experience for Red Hat Quay users. The following procedures show you how to configure Red Hat Single Sign-On and Azure AD. Collectively, these procedures include configuring OIDC on the Red Hat Quay Operator, and on standalone deployments by using the Red Hat Quay config tool. Note By following these procedures, you will be able to add any OIDC provider to Red Hat Quay, regardless of which identity provider you choose to use. 11.1. Configuring Red Hat Single Sign-On for Red Hat Quay Based on the Keycloak project, Red Hat Single Sign-On (RH-SSO) is an open source identity and access management (IAM) solution provided by Red Hat. RH-SSO allows organizations to manage user identities, secure applications, and enforce access control policies across their systems and applications. It also provides a unified authentication and authorization framework, which allows users to log in one time and gain access to multiple applications and resources without needing to re-authenticate. For more information, see Red Hat Single Sign-On . By configuring Red Hat Single Sign-On on Red Hat Quay, you can create a seamless authentication integration between Red Hat Quay and other application platforms like OpenShift Container Platform. 11.1.1. Configuring the Red Hat Single Sign-On Operator for the Red Hat Quay Operator Use the following procedure to configure Red Hat Single Sign-On for the Red Hat Quay Operator on OpenShift Container Platform. Prerequisites You have configured Red Hat Single Sign-On for the Red Hat Quay Operator. For more information, see Red Hat Single Sign-On Operator . You have configured TLS/SSL for your Red Hat Quay deployment and for Red Hat Single Sign-On. You have generated a single Certificate Authority (CA) and uploaded it to your Red Hat Single Sign-On Operator and to your Red Hat Quay configuration. You are logged into your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). Procedure Navigate to the Red Hat Single Sign-On Admin Console . On the OpenShift Container Platform Web Console , navigate to Network Route . Select the Red Hat Single Sign-On project from the drop-down list. Find the Red Hat Single Sign-On Admin Console in the Routes table. Select the Realm that you will use to configure Red Hat Quay. Click Clients under the Configure section of the navigation panel, and then click the Create button to add a new OIDC for Red Hat Quay. Enter the following information. Client ID: quay-enterprise Client Protocol: openid-connect Root URL: https://<quay endpoint>/ Click Save . This results in a redirect to the Clients setting panel. Navigate to Access Type and select Confidential . Navigate to Valid Redirect URIs . You must provide three redirect URIs. The value should be the fully qualified domain name of the Red Hat Quay registry appended with /oauth2/redhatsso/callback . 
For example: https://<quay_endpoint>/oauth2/redhatsso/callback https://<quay_endpoint>/oauth2/redhatsso/callback/attach https://<quay_endpoint>/oauth2/redhatsso/callback/cli Click Save and navigate to the new Credentials setting. Copy the value of the Secret. 11.1.2. Configuring the Red Hat Quay Operator to use Red Hat Single Sign-On Use the following procedure to configure Red Hat Single Sign-On with the Red Hat Quay Operator. Prerequisites You have configured the Red Hat Single Sign-On Operator for the Red Hat Quay Operator. Procedure Enter the Red Hat Quay config editor tool by navigating to Operators Installed Operators . Click Red Hat Quay Quay Registry . Then, click the name of your Red Hat Quay registry, and the URL listed with Config Editor Endpoint . Upload a custom SSL/TLS certificate to your OpenShift Container Platform deployment. Navigate to the Red Hat Quay config tool UI. Under Custom SSL Certificates , click Select file and upload your custom SSL/TLS certificates. Reconfigure your Red Hat Quay deployment. Scroll down to the External Authorization (OAuth) section. Click Add OIDC Provider . When prompted, enter redhatsso . Enter the following information: OIDC Server: The fully qualified domain name (FQDN) of the Red Hat Single Sign-On instance, appended with /auth/realms/ and the Realm name. You must include the forward slash at the end, for example, https://sso-redhat.example.com/auth/realms/<keycloak_realm_name>/ . Client ID: The client ID of the application that is being registered with the identity provider, for example, quay-enterprise . Client Secret: The Secret from the Credentials tab of the quay-enterprise OIDC client settings. Service Name: The name that is displayed on the Red Hat Quay login page, for example, Red Hat Single Sign On . Verified Email Address Claim: The name of the claim that is used to verify the email address of the user. Login Scopes: The scopes to send to the OIDC provider when performing the login flow, for example, openid . After configuration, you must click Add . Scroll down and click Validate Configuration Changes . Then, click Restart Now to deploy the Red Hat Quay Operator with OIDC enabled. 11.2. Configuring Azure AD OIDC for Red Hat Quay By integrating Azure AD authentication with Red Hat Quay, your organization can take advantage of the centralized user management and security features offered by Azure AD. Some features include the ability to manage user access to Red Hat Quay repositories based on their Azure AD roles and permissions, and the ability to enable multi-factor authentication and other security features provided by Azure AD. Azure Active Directory (Azure AD) authentication for Red Hat Quay allows users to authenticate and access Red Hat Quay using their Azure AD credentials. 11.2.1. Configuring Azure AD by using the Red Hat Quay config tool The following procedure configures Azure AD for Red Hat Quay using the config tool. Procedure Enter the Red Hat Quay config editor tool. If you are running a standalone Red Hat Quay deployment, you can enter the following command: Use your browser to navigate to the user interface for the configuration tool and log in. If you are on the Red Hat Quay Operator, navigate to Operators Installed Operators . Click Red Hat Quay Quay Registry . Then, click the name of your Red Hat Quay registry, and the URL listed with Config Editor Endpoint . Scroll down to the External Authorization (OAuth) section. Click Add OIDC Provider . When prompted, enter the ID for the OIDC provider.
Note Your OIDC server must end with / . After the OIDC provider has been added, Red Hat Quay lists three callback URLs that must be registered on Azure. These addresses allow Azure to direct back to Red Hat Quay after authentication is confirmed. For example: https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli After all required fields have been set, validate your settings by clicking Validate Configuration Changes . If any errors are reported, continue editing your configuration until the settings are valid and Red Hat Quay can connect to your database and Redis servers. 11.2.2. Configuring Azure AD by updating the Red Hat Quay config.yaml file Use the following procedure to configure Azure AD by updating the Red Hat Quay config.yaml file directly. Procedure Using the following procedure, you can add any OIDC provider to Red Hat Quay, regardless of which identity provider is being added. If your system has a firewall in use, or a proxy enabled, you must whitelist all Azure API endpoints for each OAuth application that is created. Otherwise, the following error is returned: x509: certificate signed by unknown authority . Add the following information to your Red Hat Quay config.yaml file: AZURE_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_address_> 4 SERVICE_NAME: Azure AD 5 VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6 1 The parent key that holds the OIDC configuration settings. In this example, the parent key used is AZURE_LOGIN_CONFIG , however, the string AZURE can be replaced with any arbitrary string based on your specific needs, for example ABC123 . However, the following strings are not accepted: GOOGLE , GITHUB . These strings are reserved for their respective identity platforms and require a specific config.yaml entry contingent upon which platform you are using. 2 The client ID of the application that is being registered with the identity provider. 3 The client secret of the application that is being registered with the identity provider. 4 The address of the OIDC server that is being used for authentication. In this example, you must use sts.windows.net as the issuer identifier. Using https://login.microsoftonline.com results in the following error: Could not create provider for AzureAD. Error: oidc: issuer did not match the issuer returned by provider, expected "https://login.microsoftonline.com/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5" got "https://sts.windows.net/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5/" . 5 The name of the service that is being authenticated. 6 The name of the claim that is used to verify the email address of the user. Proper configuration of Azure AD results in three redirects with the following format: https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli Restart your Red Hat Quay deployment. | [
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.10.9 config secret",
"AZURE_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_address_> 4 SERVICE_NAME: Azure AD 5 VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/configuring-oidc-authentication |
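For deployments that maintain config.yaml directly rather than through the config tool, the Red Hat Single Sign-On provider from section 11.1 can be expressed with the same <PREFIX>_LOGIN_CONFIG pattern shown for Azure AD in section 11.2.2. The following is only a sketch: the parent key name, the realm placeholder, and the email claim value are assumptions that you must adapt to your environment.

REDHATSSO_LOGIN_CONFIG:   # parent key name is arbitrary; the *_LOGIN_CONFIG suffix follows the Azure example
    CLIENT_ID: quay-enterprise
    CLIENT_SECRET: <client_secret>
    OIDC_SERVER: https://sso-redhat.example.com/auth/realms/<keycloak_realm_name>/   # must end with /
    SERVICE_NAME: Red Hat Single Sign On
    VERIFIED_EMAIL_CLAIM_NAME: email   # assumed claim name; adjust to your realm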
Chapter 1. Image APIs | Chapter 1. Image APIs 1.1. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ImageSignature [image.openshift.io/v1] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ImageStreamImage [image.openshift.io/v1] Description ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. User interfaces and regular users can use this resource to access the metadata details of a tagged image in the image stream history for viewing, since Image resources are not directly accessible to end users. A not found error will be returned if no such image is referenced by a tag within the ImageStream. Images are created when spec tags are set on an image stream that represent an image in an external registry, when pushing to the integrated registry, or when tagging an existing image from one image stream to another. The name of an image stream image is in the form "<STREAM>@<DIGEST>", where the digest is the content addressible identifier for the image (sha256:xxxxx... ). You can use ImageStreamImages as the from.kind of an image stream spec tag to reference an image exactly. The only operations supported on the imagestreamimage endpoint are retrieving the image. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. ImageStreamImport [image.openshift.io/v1] Description The image stream import resource provides an easy way for a user to find and import container images from other container image registries into the server. Individual images or an entire image repository may be imported, and users may choose to see the results of the import prior to tagging the resulting images into the specified image stream. This API is intended for end-user tools that need to see the metadata of the image prior to import (for instance, to generate an application from it). Clients that know the desired image can continue to create spec.tags directly into their image streams. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. 
ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. ImageStreamMapping [image.openshift.io/v1] Description ImageStreamMapping represents a mapping from a single image stream tag to a container image as well as the reference to the container image stream the image came from. This resource is used by privileged integrators to create an image resource and to associate it with an image stream in the status tags field. Creating an ImageStreamMapping will allow any user who can view the image stream to tag or pull that image, so only create mappings where the user has proven they have access to the image contents directly. The only operation supported for this resource is create and the metadata name and namespace should be set to the image stream containing the tag that should be updated. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. ImageStream [image.openshift.io/v1] Description An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. ImageStreamTag [image.openshift.io/v1] Description ImageStreamTag represents an Image that is retrieved by tag name from an ImageStream. Use this resource to interact with the tags and images in an image stream by tag, or to see the image details for a particular tag. The image associated with this resource is the most recently successfully tagged, imported, or pushed image (as described in the image stream status.tags.items list for this tag). If an import is in progress or has failed the image will be shown. Deleting an image stream tag clears both the status and spec fields of an image stream. If no image can be retrieved for a given tag, a not found error will be returned. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ImageTag [image.openshift.io/v1] Description ImageTag represents a single tag within an image stream and includes the spec, the status history, and the currently referenced image (if any) of the provided tag. 
This type replaces the ImageStreamTag by providing a full view of the tag. ImageTags are returned for every spec or status tag present on the image stream. If no tag exists in either form a not found error will be returned by the API. A create operation will succeed if no spec tag has already been defined and the spec field is set. Delete will remove both spec and status elements from the image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/image_apis/image-apis |
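To make the spec.tags usage described for ImageStream above concrete, the following is a minimal sketch of an image stream that imports an external image and keeps it refreshed on a schedule. The namespace, stream name, and registry reference are placeholders, not values taken from this reference.

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: example
  namespace: my-project
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        # Placeholder external image reference; pull credentials come from a
        # pull-type secret in the same namespace, as noted above.
        name: registry.example.com/team/example:latest
      importPolicy:
        scheduled: true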
B.5.2. Useful Websites | B.5.2. Useful Websites The RPM website - http://www.rpm.org/ The RPM mailing list can be subscribed to, and its archives read from, here - https://lists.rpm.org/mailman/listinfo/rpm-list | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-rpm-useful-websites |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/making-open-source-more-inclusive |
7.153. parted | 7.153. parted 7.153.1. RHBA-2015:1357 - parted bug fix update Updated parted packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The parted packages provide tools to create, destroy, resize, move, and copy hard disk partitions. The parted program can be used for creating space for new operating systems, reorganizing disk usage, and copying data to new hard disks. Bug Fixes BZ# 1189328 Partitions that parted created while operating on device-mapper devices, such as mpath, could be smaller than expected. This update modifies parted to convert the native device sector size to 512 sector size when communicating with the device-mapper library. As a result, partitions are created with the correct size in the mentioned situation. BZ# 1180683 Previously, parted did not correctly handle disks or disk images where the backup GUID Partition Table (GPT) header was missing or could not be found at the expected location at the end of the disk. This situation can occur with disks that are shorter or longer than when they were originally created. Consequently, parted could terminate unexpectedly or prompt the user to have parted fix the problem and fail to do so. A patch has been applied to fix GPT backup header handling. Now, after the user instructs parted to fix the problem in the described scenario, parted succeeds. Users of parted are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-parted |
Chapter 139. Hazelcast Ringbuffer Component | Chapter 139. Hazelcast Ringbuffer Component Available as of Camel version 2.16 Avalaible from Camel 2.16 The Hazelcast ringbuffer component is one of Camel Hazelcast Components which allows you to access Hazelcast ringbuffer. Ringbuffer is a distributed data structure where the data is stored in a ring-like structure. You can think of it as a circular array with a certain capacity. 139.1. Options The Hazelcast Ringbuffer component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast Ringbuffer endpoint is configured using URI syntax: with the following path and query parameters: 139.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 139.1.2. Query Parameters (10 parameters): Name Description Default Type reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean defaultOperation (producer) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (producer) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (producer) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transfered. If header or body contains not serializable objects, they will be skipped. false boolean 139.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.hazelcast-ringbuffer.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-ringbuffer.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. 
false Boolean camel.component.hazelcast-ringbuffer.enabled Enable hazelcast-ringbuffer component true Boolean camel.component.hazelcast-ringbuffer.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-ringbuffer.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-ringbuffer.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 139.3. ringbuffer cache producer The ringbuffer producer provides 5 operations: * add * readonceHead * readonceTail * remainingCapacity * capacity Header Variables for the request message: Name Type Description CamelHazelcastOperationType String valid values are: put, get, removevalue, delete CamelHazelcastObjectId String the object id to store / find your object inside the cache 139.3.1. Sample for put : Java DSL: from("direct:put") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD)) .to(String.format("hazelcast-%sbar", HazelcastConstants.RINGBUFFER_PREFIX)); Spring DSL: <route> <from uri="direct:put" /> <log message="put.."/> <!-- If using version 2.8 and above set headerName to "CamelHazelcastOperationType" --> <setHeader headerName="hazelcast.operation.type"> <constant>add</constant> </setHeader> <to uri="hazelcast-ringbuffer:foo" /> </route> 139.3.2. Sample for readonce from head : Java DSL: from("direct:get") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.READ_ONCE_HEAD)) .toF("hazelcast-%sbar", HazelcastConstants.RINGBUFFER_PREFIX) .to("seda:out"); | [
"hazelcast-ringbuffer:cacheName",
"from(\"direct:put\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD)) .to(String.format(\"hazelcast-%sbar\", HazelcastConstants.RINGBUFFER_PREFIX));",
"<route> <from uri=\"direct:put\" /> <log message=\"put..\"/> <!-- If using version 2.8 and above set headerName to \"CamelHazelcastOperationType\" --> <setHeader headerName=\"hazelcast.operation.type\"> <constant>add</constant> </setHeader> <to uri=\"hazelcast-ringbuffer:foo\" /> </route>",
"from(\"direct:get\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.READ_ONCE_HEAD)) .toF(\"hazelcast-%sbar\", HazelcastConstants.RINGBUFFER_PREFIX) .to(\"seda:out\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-ringbuffer-component |
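Following the head example above, a sample for readonce from tail uses the same route shape. This is a sketch only; the exact HazelcastOperation constant name should be confirmed against the camel-hazelcast version in use.

Java DSL:

from("direct:get-tail")
    // READ_ONCE_TAIL mirrors the readonceTail operation listed for the ringbuffer producer.
    .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.READ_ONCE_TAIL))
    .toF("hazelcast-%sbar", HazelcastConstants.RINGBUFFER_PREFIX)
    .to("seda:out");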
Chapter 9. File Systems | Chapter 9. File Systems SMB 2 and SMB 3 now support DFS Distributed File System (DFS), which was previously supported only with the Server Message Block (SMB) protocol version 1, is now also supported in SMB 2 and SMB 3. With this update, you can now mount DFS shares using the SMB 2 and SMB 3 protocols. (BZ#1481303) File system DAX now performs better when mapping a large amount of memory Prior to this enhancement, the Direct Access (DAX) feature mapped only 4KiB entries into application address space. This had a negative performance impact on workloads that mapped large amounts of memory, because it increased Translation Lookaside Buffer (TLB) pressure. With this update, the kernel supports 2MiB Page Middle Directory (PMD) faults in persistent memory mappings. This significantly reduces TLB pressure, and file system DAX now performs better when mapping a large amount of memory. (BZ#1457572) quotacheck is now faster on ext4 The quotacheck utility now directly scans ext4 file system metadata instead of analyzing each individual file for occupied disk size. If the file system contains many files, quota initialization and quota check are now significantly faster. (BZ#1393849) The CephFS kernel client is fully supported with Red Hat Ceph Storage 3 The Ceph File System (CephFS) kernel module enables Red Hat Enterprise Linux nodes to mount Ceph File Systems from Red Hat Ceph Storage clusters. The kernel client in Red Hat Enterprise Linux is a more efficient alternative to the Filesystem in Userspace (FUSE) client included with Red Hat Ceph Storage. Note that the kernel client currently lacks support for CephFS quotas. The CephFS kernel client was introduced in Red Hat Enterprise Linux 7.3 as a Technology Preview, and since the release of Red Hat Ceph Storage 3, CephFS is fully supported. For more information, see the Ceph File System Guide for Red Hat Ceph Storage 3: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_file_system_guide/ . (BZ#1626526) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_file_systems |
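As an illustration of the SMB 2 and SMB 3 DFS support noted above, a DFS share can be mounted while pinning the protocol version with the cifs vers option. The server, share, mount point, and user below are placeholders.

# Mount a DFS root over SMB 3.0 (placeholder host, share, and user)
mount -t cifs //server.example.com/dfsroot /mnt/dfs -o vers=3.0,username=user1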
Chapter 2. Major Changes and Migration Considerations | Chapter 2. Major Changes and Migration Considerations This chapter discusses major changes and features that may affect migration from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. Read each section carefully for a clear understanding of how your system will be impacted by upgrading to Red Hat Enterprise Linux 7. 2.1. System Limitations Red Hat Enterprise Linux supported system limitations have changed between version 6 and version 7. Red Hat Enterprise Linux 7 now requires at least 1 GB of disk space to install. However, Red Hat recommends a minimum of 5 GB of disk space for all supported architectures. AMD64 and Intel 64 systems now require at least 1 GB of memory to run. Red Hat recommends at least 1 GB memory per logical CPU. AMD64 and Intel 64 systems are supported up to the following limits: at most 3 TB memory (theoretical limit: 64 TB) at most 160 logical CPUs (theoretical limit: 5120 logical CPUs) 64-bit Power systems now require at least 2 GB of memory to run. They are supported up to the following limits: at most 2 TB memory (theoretical limit: 64 TB) at most 128 logical CPUs (theoretical limit: 2048 logical CPUs) IBM System z systems now require at least 1 GB of memory to run, and are theoretically capable of supporting up to the following limits: at most 3 TB memory at most 101 logical CPUs The most up to date information about Red Hat Enterprise Linux 7 requirements and limitations is available online at https://access.redhat.com/site/articles/rhel-limits . To check whether your hardware or software is certified, see https://access.redhat.com/certifications . 2.2. Installation and Boot Read this section for a summary of changes made to installation tools and processes between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. 2.2.1. New Boot Loader Red Hat Enterprise Linux 7 introduces the GRUB2 boot loader, which replaces legacy GRUB in Red Hat Enterprise Linux 7.0 and later. GRUB2 supports more file systems and virtual block devices than its predecessor. It automatically scans for and configures available operating systems. The user interface has also been improved, and users have the option to skip boot loader installation. However, the move to GRUB2 also removes support for installing the boot loader to a formatted partition on BIOS machines with MBR-style partition tables. This behavior change was made because some file systems have automated optimization features that move parts of the core boot loader image, which could break the GRUB legacy boot loader. With GRUB2, the boot loader is installed in the space available between the partition table and the first partition on BIOS machines with MBR (Master Boot Record) style partition tables. BIOS machines with GPT (GUID Partition Table) style partition tables must create a special BIOS Boot Partition for the boot loader. UEFI machines continue to install the boot loader to the EFI System Partition. The recommended minimum partition sizes have also changed as a result of the new boot loader. Table 2.1, "Recommended minimum partition sizes" gives a summary of the new recommendations. Further information is available in MBR and GPT Considerations . Table 2.1. Recommended minimum partition sizes Partition BIOS & MBR BIOS & GPT UEFI & GPT /boot 500 MB / 10 GB swap At least twice the RAM. See Recommended Partitioning Scheme for details. 
boot loader N/A (Installed between the partition table and the first partition) Users can install GRUB2 to a formatted partition manually with the force option at the risk of causing file system damage, or use an alternative boot loader. For a list of alternative boot loaders, see the Installation Guide . If you have a dual-boot system, use GRUB2's operating system detection to automatically write a configuration file that can boot either operating system: Important Note, that if you have a dual-boot that is based on using UEFI uses other mechanism than MBR legacy based one. This means that you do not need to use EFI specific grub2 command: # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg 2.2.1.1. Default Boot Entry for Debugging Default boot entry for systemd has been added to the /etc/grub.cfg file. It is no longer necessary to enable debugging manually. The default boot entry allows you to debug systems without affecting options at the boot time. 2.2.2. New Init System systemd is the system and service manager that replaces the SysV init system used in releases of Red Hat Enterprise Linux. systemd is the first process to start during boot, and the last process to terminate at shutdown. It coordinates the remainder of the boot process and configures the system for the user. Under systemd , interdependent programs can load in parallel, making the boot process considerably faster. systemd is largely compatible with SysV in | [
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"man systemd-ask-password",
"rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000",
"rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portname=foo rd.znet=ctc,0.0.0600,0.0.0601,protocol=bar",
"rd.driver.blacklist=mod1,mod2,",
"rd.driver.blacklist=firewire_ohci",
"/dev/critical /critical xfs defaults 1 2 /dev/optional /optional xfs defaults,nofail 1 2",
"mv -f /var/run /var/run.runmove~ ln -sfn ../run /var/run mv -f /var/lock /var/lock.lockmove~ ln -sfn ../run/lock /var/lock",
"find /usr/{lib,lib64,bin,sbin} -name '.usrmove'",
"dmesg journalctl -ab --full",
"systemctl enable tmp.mount",
"systemctl disable tmp.mount",
"AUTOCREATE_SERVER_KEYS=YES export SSH_USE_STRONG_RNG=1 export OPENSSL_DISABLE_AES_NI=1",
"AUTOCREATE_SERVER_KEYS=YES SSH_USE_STRONG_RNG=1 OPENSSL_DISABLE_AES_NI=1",
"man yum",
"mount -o acl /dev/loop0 test mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.",
"part /mnt/example --fstype=xfs",
"btrfs mount_point --data= level --metadata= level --label= label partitions",
"/dev/essential-disk /essential xfs auto,defaults 0 0 /dev/non-essential-disk /non-essential xfs auto,defaults,nofail 0 0",
"udev-alias: sdb (disk/by-id/ata-QEMU_HARDDISK_QM00001)",
"man ncat",
"undisclosed_recipients_header = To: undisclosed-recipients:;",
"postscreen_dnsbl_reply_map = texthash:/etc/postfix/dnsbl_reply",
"Secret DNSBL name Name in postscreen(8) replies secret.zen.spamhaus.org zen.spamhaus.org",
"man rpc.nfsd",
"man nfs",
"man nfsmount.conf",
"systemctl start named-chroot.service",
"systemctl stop named-chroot.service",
"man keepalived.conf",
"man 5 votequorum",
"firewall-offline-cmd",
"yum update -y opencryptoki",
"pkcsconf -s pkcsconf -t",
"systemctl stop pkcsslotd.service",
"ps ax | grep pkcsslotd",
"cp -r /var/lib/opencryptoki/ccatok /var/lib/opencryptoki/ccatok.backup",
"cd /var/lib/opencryptoki/ccatok pkcscca -m v2objectsv3 -v",
"rm /dev/shm/var.lib.opencryptoki.ccatok",
"systemctl start pkcsslotd.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Major_Changes_and_Migration_Considerations |
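As a quick orientation to the systemd transition described in Section 2.2.2, the following illustrative commands show how common SysV service operations map to systemctl; the httpd service name is only an example and not part of the original text.

# RHEL 6 (SysV init)          ->  RHEL 7 (systemd)
service httpd start               systemctl start httpd.service
service httpd status              systemctl status httpd.service
chkconfig httpd on                systemctl enable httpd.service
chkconfig httpd off               systemctl disable httpd.service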
Chapter 86. LRA | Chapter 86. LRA Since Camel 2.21 The LRA module provides bindings of the Saga EIP with any MicroProfile compatible LRA Coordinator . 86.1. Dependencies When using lra with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-lra-starter</artifactId> </dependency> 86.2. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.lra.coordinator-context-path The context path of the LRA coordinator service. String camel.lra.coordinator-url The base URL of the LRA coordinator service. String camel.lra.enabled Global option to enable/disable component auto-configuration, default is true. true Boolean camel.lra.local-participant-context-path The context path of the local participant callback services. String camel.lra.local-participant-url The local URL where the coordinator should send callbacks to. String
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-lra-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-lra-component-starter |
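Putting the auto-configuration options above together, a Spring Boot application.yaml might set them as follows. Every host name and context path here is a placeholder, and the coordinator context path in particular depends on how your LRA coordinator is deployed, so treat this as a sketch rather than a recommended configuration.

camel:
  lra:
    enabled: true
    # Placeholder coordinator endpoint; point this at your MicroProfile LRA coordinator.
    coordinator-url: http://lra-coordinator.example.com:8080
    coordinator-context-path: /lra-coordinator
    # Placeholder callback endpoint for this application.
    local-participant-url: http://my-camel-service.example.com:8080
    local-participant-context-path: /lra-participant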
Chapter 3. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure | Chapter 3. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure 3.1. Replacing operational or failed storage devices on Azure installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an Azure installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Azure installer-provisioned infrastructure . Replacing failed nodes on Azure installer-provisioned infrastructures . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_microsoft_azure |
5.82. geronimo-specs | 5.82. geronimo-specs 5.82.1. RHBA-2012:1397 - geronimo-specs bug fix update Updated geronimo-specs packages that fix one bug are now available for Red Hat Enterprise Linux 6. The geronimo-specs packages provide the specifications for Apache's ASF-licenced J2EE server Geronimo. Bug Fix BZ# 818755 Prior to this update, the geronimo-specs-compat package description contained inaccurate references. This update removes these references so that the description is now accurate. All users of geronimo-specs are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/geronimo-specs |
probe::ioscheduler.elv_add_request.kp | probe::ioscheduler.elv_add_request.kp Name probe::ioscheduler.elv_add_request.kp - kprobe based probe to indicate that a request was added to the request queue Synopsis ioscheduler.elv_add_request.kp Values disk_major Disk major number of the request disk_minor Disk minor number of the request rq_flags Request flags elevator_name The type of I/O elevator currently enabled q pointer to request queue rq Address of the request name Name of the probe point | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioscheduler-elv-add-request-kp |
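A minimal SystemTap script built on this probe point and the values listed above might look like the following sketch; it simply prints a line for each request added to a request queue.

probe ioscheduler.elv_add_request.kp {
    # name, disk_major, disk_minor, and elevator_name are the values documented above.
    printf("%s: request added on device %d:%d via %s elevator\n",
           name, disk_major, disk_minor, elevator_name)
}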
Chapter 3. Red Hat Quay Robot Account overview | Chapter 3. Red Hat Quay Robot Account overview Robot Accounts are used to set up automated access to the repositories in your Red Hat Quay. registry. They are similar to OpenShift Container Platform service accounts. Setting up a Robot Account results in the following: Credentials are generated that are associated with the Robot Account. Repositories and images that the Robot Account can push and pull images from are identified. Generated credentials can be copied and pasted to use with different container clients, such as Docker, Podman, Kubernetes, Mesos, and so on, to access each defined repository. Robot Accounts can help secure your Red Hat Quay registry by offering various security advantages, such as the following: Specifying repository access. Granular permissions, such as Read (pull) or Write (push) access. They can also be equipped with Admin permissions if warranted. Designed for CI/CD pipelines, system integrations, and other automation tasks, helping avoid credential exposure in scripts, pipelines, or other environment variables. Robot Accounts use tokens instead of passwords, which provides the ability for an administrator to revoke the token in the event that it is compromised. Each Robot Account is limited to a single user namespace or Organization. For example, the Robot Account could provide access to all repositories for the user quayadmin . However, it cannot provide access to repositories that are not in the user's list of repositories. Robot Accounts can be created using the Red Hat Quay UI, or through the CLI using the Red Hat Quay API. After creation, Red Hat Quay administrators can leverage more advanced features with Robot Accounts, such as keyless authentication. 3.1. Creating a robot account by using the UI Use the following procedure to create a robot account using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization that you will create the robot account for, for example, test-org . Click the Robot accounts tab Create robot account . In the Provide a name for your robot account box, enter a name, for example, robot1 . The name of your Robot Account becomes a combination of your username plus the name of the robot, for example, quayadmin+robot1 Optional. The following options are available if desired: Add the robot account to a team. Add the robot account to a repository. Adjust the robot account's permissions. On the Review and finish page, review the information you have provided, then click Review and finish . The following alert appears: Successfully created robot account with robot name: <organization_name> + <robot_name> . Alternatively, if you tried to create a robot account with the same name as another robot account, you might receive the following error message: Error creating robot account . Optional. You can click Expand or Collapse to reveal descriptive information about the robot account. Optional. You can change permissions of the robot account by clicking the kebab menu Set repository permissions . The following message appears: Successfully updated repository permission . Optional. You can click the name of your robot account to obtain the following information: Robot Account : Select this obtain the robot account token. You can regenerate the token by clicking Regenerate token now . Kubernetes Secret : Select this to download credentials in the form of a Kubernetes pull secret YAML file. 
Podman : Select this to copy a full podman login command line that includes the credentials. Docker Configuration : Select this to copy a full docker login command line that includes the credentials. 3.2. Creating a robot account by using the Red Hat Quay API Use the following procedure to create a robot account using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a new robot account for an organization using the PUT /api/v1/organization/{orgname}/robots/{robot_shortname} endpoint: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>" Example output {"name": "orgname+robot-name", "created": "Fri, 10 May 2024 15:11:00 -0000", "last_accessed": null, "description": "", "token": "<example_secret>", "unstructured_metadata": null} Enter the following command to create a new robot account for the current user with the PUT /api/v1/user/robots/{robot_shortname} endpoint: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/user/robots/<robot_name>" Example output {"name": "quayadmin+robot-name", "created": "Fri, 10 May 2024 15:24:57 -0000", "last_accessed": null, "description": "", "token": "<example_secret>", "unstructured_metadata": null} 3.3. Bulk managing robot account repository access Use the following procedure to manage, in bulk, robot account repository access by using the Red Hat Quay v2 UI. Prerequisites You have created a robot account. You have created multiple repositories under a single organization. Procedure On the Red Hat Quay v2 UI landing page, click Organizations in the navigation pane. On the Organizations page, select the name of the organization that has multiple repositories. The number of repositories under a single organization can be found under the Repo Count column. On your organization's page, click Robot accounts . For the robot account that will be added to multiple repositories, click the kebab icon Set repository permissions . On the Set repository permissions page, check the boxes of the repositories that the robot account will be added to. For example: Set the permissions for the robot account, for example, None , Read , Write , Admin . Click save . An alert that says Success alert: Successfully updated repository permission appears on the Set repository permissions page, confirming the changes. Return to the Organizations Robot accounts page. Now, the Repositories column of your robot account shows the number of repositories that the robot account has been added to. 3.4. Disabling robot accounts by using the UI Red Hat Quay administrators can manage robot accounts by disallowing users to create new robot accounts. Important Robot accounts are mandatory for repository mirroring. Setting the ROBOTS_DISALLOW configuration field to true breaks mirroring configurations. Users mirroring repositories should not set ROBOTS_DISALLOW to true in their config.yaml file. This is a known issue and will be fixed in a future release of Red Hat Quay. Use the following procedure to disable robot account creation. Prerequisites You have created multiple robot accounts. Procedure Update your config.yaml field to add the ROBOTS_DISALLOW variable, for example: ROBOTS_DISALLOW: true Restart your Red Hat Quay deployment. 
Verification: Creating a new robot account Navigate to your Red Hat Quay repository. Click the name of a repository. In the navigation pane, click Robot Accounts . Click Create Robot Account . Enter a name for the robot account, for example, <organization-name/username>+<robot-name> . Click Create robot account to confirm creation. The following message appears: Cannot create robot account. Robot accounts have been disabled. Please contact your administrator. Verification: Logging into a robot account On the command-line interface (CLI), attempt to log in as one of the robot accounts by entering the following command: USD podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" <quay-server.example.com> The following error message is returned: Error: logging into "<quay-server.example.com>": invalid username/password You can pass in the log-level=debug flag to confirm that robot accounts have been deactivated: USD podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" --log-level=debug <quay-server.example.com> ... DEBU[0000] error logging into "quay-server.example.com": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator. 3.5. Regenerating a robot account token by using the Red Hat Quay API Use the following procedure to regenerate a robot account token using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to regenerate a robot account token for an organization using the POST /api/v1/organization/{orgname}/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate" Example output {"name": "test-org+test", "created": "Fri, 10 May 2024 17:46:02 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} Enter the following command to regenerate a robot account token for the current user with the POST /api/v1/user/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate" Example output {"name": "quayadmin+test", "created": "Fri, 10 May 2024 14:12:11 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} 3.6. Deleting a robot account by using the UI Use the following procedure to delete a robot account using the Red Hat Quay UI. Procedure Log into your Red Hat Quay registry: Click the name of the Organization that has the robot account. Click Robot accounts . Check the box of the robot account to be deleted. Click the kebab menu. Click Delete . Type confirm into the textbox, then click Delete . 3.7. Deleting a robot account by using the Red Hat Quay API Use the following procedure to delete a robot account using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
Procedure Enter the following command to delete a robot account for an organization using the DELETE /api/v1/organization/{orgname}/robots/{robot_shortname} endpoint: curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>" The CLI does not return information when deleting a robot account with the API. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/organization/{orgname}/robots command to see if details are returned for the robot account: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots" Example output {"robots": []} Enter the following command to delete a robot account for the current user with the DELETE /api/v1/user/robots/{robot_shortname} endpoint: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" The CLI does not return information when deleting a robot account for the current user with the API. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/user/robots/{robot_shortname} command to see if details are returned for the robot account: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" Example output {"message":"Could not find robot with specified username"} | [
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"ROBOTS_DISALLOW: true",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>",
"Error: logging into \"<quay-server.example.com>\": invalid username/password",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>",
"DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator.",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"{\"robots\": []}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"{\"message\":\"Could not find robot with specified username\"}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/managing_access_and_permissions/allow-robot-access-user-repo |
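For reference, the podman login command copied from a robot account's credentials window typically has the following shape; the hostname, organization, robot name, token, and repository shown here are placeholders rather than values generated by the procedures above:
podman login -u="<organization_name>+<robot_name>" -p="<robot_token>" <quay-server.example.com>
podman pull <quay-server.example.com>/<organization_name>/<repository>:<tag>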
Chapter 28. Usability Analytics and Data Collection | Chapter 28. Usability Analytics and Data Collection Usability data collection is included with automation controller to collect data to better understand how automation controller users interact with it. Only users installing a trial or a fresh installation of automation controller are opted-in for this data collection. Automation controller collects user data automatically to help improve the product. You can opt out or control the way automation controller collects data by setting your participation level in the User Interface settings in the Settings menu. 28.1. Setting up data collection participation Use the following procedure to set your participation level for data collection. Procedure From the navigation panel, select Settings . Select User Interface settings from the User Interface option. Click Edit . Select the desired level of data collection from the User Analytics Tracking State list: Off : Prevents any data collection. Anonymous : Enables data collection without your specific user data. Detailed : Enables data collection including your specific user data. Click Save to apply the settings, or Cancel to abandon the changes. For more information, see the Red Hat Privacy Statement . 28.2. Automation Analytics When you imported your license for the first time, you were automatically opted in for the collection of data that powers Automation Analytics, a cloud service that is part of the Ansible Automation Platform subscription. Important For opt-in of Automation Analytics to have any effect, your instance of automation controller must be running on Red Hat Enterprise Linux. As with Red Hat Insights, Automation Analytics is built to collect the minimum amount of data needed. No credential secrets, personal data, automation variables, or task output is gathered. For more information, see Details of data collection . To enable or disable this feature, use the following steps: From the navigation panel, select Settings . Select Miscellaneous System settings from the list of System options. Toggle the Gather data for Automation Analytics switch and enter your Red Hat customer credentials. Click Save . You can view the location to which the collection of insights data is uploaded in the Automation Analytics upload URL field on the Details page. By default, the data is collected every four hours. When you enable this feature, data is collected up to a month in arrears (or until the previous collection). You can turn off this data collection at any time in the Miscellaneous System settings of the System configuration window. This setting can also be enabled through the API by specifying INSIGHTS_TRACKING_STATE = true in either of these endpoints: api/v2/settings/all api/v2/settings/system The Automation Analytics generated from this data collection can be found on the Red Hat Cloud Services portal. Clusters data is the default view. This graph represents the number of job runs across all automation controller clusters over a period of time. The example shows a span of a week in a stacked bar-style chart that is organized by the number of jobs that ran successfully (in green) and jobs that failed (in red). Alternatively, you can select a single cluster to view its job status information. This multi-line chart represents the number of job runs for a single automation controller cluster for a specified period of time. The preceding example shows a span of a week, organized by the number of successfully running jobs (in green) and jobs that failed (in red).
You can specify the number of successful and failed job runs for a selected cluster over a span of one week, two weeks, and monthly increments. On the clouds navigation panel, select Organization Statistics to view information for the following: Use by organization Job runs by organization Organization status 28.2.1. Use by organization The following chart represents the number of tasks run inside all jobs by a particular organization. 28.2.2. Job runs by organization This chart represents automation controller use across all automation controller clusters by organization, calculated by the number of jobs run by that organization. 28.2.3. Organization status This bar chart represents automation controller use by organization and date, which is calculated by the number of jobs run by that organization on a particular date. Alternatively, you can specify to show the number of job runs per organization in one week, two weeks, and monthly increments. 28.3. Details of data collection Automation Analytics collects the following classes of data from automation controller: Basic configuration, such as which features are enabled, and what operating system is being used Topology and status of the automation controller environment and hosts, including capacity and health Counts of automation resources: organizations, teams, and users inventories and hosts credentials (indexed by type) projects (indexed by type) templates schedules active sessions running and pending jobs Job execution details (start time, finish time, launch type, and success) Automation task details (success, host id, playbook/role, task name, and module used) You can use awx-manage gather_analytics (without --ship ) to inspect the data that automation controller sends, so that you can satisfy your data collection concerns. This creates a tarball that contains the analytics data that is sent to Red Hat. This file contains a number of JSON and CSV files. Each file contains a different set of analytics data. manifest.json config.json instance_info.json counts.json org_counts.json cred_type_counts.json inventory_counts.json projects_by_scm_type.json query_info.json job_counts.json job_instance_counts.json unified_job_template_table.csv unified_jobs_table.csv workflow_job_template_node_table.csv workflow_job_node_table.csv events_table.csv 28.3.1. manifest.json manifest.json is the manifest of the analytics data. It describes each file included in the collection, and what version of the schema for that file is included. The following is an example manifest.json file: "config.json": "1.1", "counts.json": "1.0", "cred_type_counts.json": "1.0", "events_table.csv": "1.1", "instance_info.json": "1.0", "inventory_counts.json": "1.2", "job_counts.json": "1.0", "job_instance_counts.json": "1.0", "org_counts.json": "1.0", "projects_by_scm_type.json": "1.0", "query_info.json": "1.0", "unified_job_template_table.csv": "1.0", "unified_jobs_table.csv": "1.0", "workflow_job_node_table.csv": "1.0", "workflow_job_template_node_table.csv": "1.0" } 28.3.2. config.json The config.json file contains a subset of the configuration endpoint /api/v2/config from the cluster. 
An example config.json is: { "ansible_version": "2.9.1", "authentication_backends": [ "social_core.backends.azuread.AzureADOAuth2", "django.contrib.auth.backends.ModelBackend" ], "external_logger_enabled": true, "external_logger_type": "splunk", "free_instances": 1234, "install_uuid": "d3d497f7-9d07-43ab-b8de-9d5cc9752b7c", "instance_uuid": "bed08c6b-19cc-4a49-bc9e-82c33936e91b", "license_expiry": 34937373, "license_type": "enterprise", "logging_aggregators": [ "awx", "activity_stream", "job_events", "system_tracking" ], "pendo_tracking": "detailed", "platform": { "dist": [ "redhat", "7.4", "Maipo" ], "release": "3.10.0-693.el7.x86_64", "system": "Linux", "type": "traditional" }, "total_licensed_instances": 2500, "controller_url_base": "https://ansible.rhdemo.io", "controller_version": "3.6.3" } Which includes the following fields: ansible_version : The system Ansible version on the host authentication_backends : The user authentication backends that are available. For more information, see Setting up social authentication or Setting up LDAP authentication . external_logger_enabled : Whether external logging is enabled external_logger_type : What logging backend is in use if enabled. For more information, see Logging and aggregation . logging_aggregators : What logging categories are sent to external logging. For more information, see Logging and aggregation . free_instances : How many hosts are available in the license. A value of zero means the cluster is fully consuming its license. install_uuid : A UUID for the installation (identical for all cluster nodes) instance_uuid : A UUID for the instance (different for each cluster node) license_expiry : Time to expiry of the license, in seconds license_type : The type of the license (should be 'enterprise' for most cases) pendo_tracking : State of usability_data_collection platform : The operating system the cluster is running on total_licensed_instances : The total number of hosts in the license controller_url_base : The base URL for the cluster used by clients (shown in Automation Analytics) controller_version : Version of the software on the cluster 28.3.3. instance_info.json The instance_info.json file contains detailed information on the instances that make up the cluster, organized by instance UUID. The following is an example instance_info.json file: { "bed08c6b-19cc-4a49-bc9e-82c33936e91b": { "capacity": 57, "cpu": 2, "enabled": true, "last_isolated_check": "2019-08-15T14:48:58.553005+00:00", "managed_by_policy": true, "memory": 8201400320, "uuid": "bed08c6b-19cc-4a49-bc9e-82c33936e91b", "version": "3.6.3" } "c0a2a215-0e33-419a-92f5-e3a0f59bfaee": { "capacity": 57, "cpu": 2, "enabled": true, "last_isolated_check": "2019-08-15T14:48:58.553005+00:00", "managed_by_policy": true, "memory": 8201400320, "uuid": "c0a2a215-0e33-419a-92f5-e3a0f59bfaee", "version": "3.6.3" } } Which includes the following fields: capacity : The capacity of the instance for executing tasks. cpu : Processor cores for the instance memory : Memory for the instance enabled : Whether the instance is enabled and accepting tasks managed_by_policy : Whether the instance's membership in instance groups is managed by policy, or manually managed version : Version of the software on the instance 28.3.4. counts.json The counts.json file contains the total number of objects for each relevant category in a cluster. 
The following is an example counts.json file: { "active_anonymous_sessions": 1, "active_host_count": 682, "active_sessions": 2, "active_user_sessions": 1, "credential": 38, "custom_inventory_script": 2, "custom_virtualenvs": 4, "host": 697, "inventories": { "normal": 20, "smart": 1 }, "inventory": 21, "job_template": 78, "notification_template": 5, "organization": 10, "pending_jobs": 0, "project": 20, "running_jobs": 0, "schedule": 16, "team": 5, "unified_job": 7073, "user": 28, "workflow_job_template": 15 } Each entry in this file is for the corresponding API objects in /api/v2 , with the exception of the active session counts. 28.3.5. org_counts.json The org_counts.json file contains information on each organization in the cluster, and the number of users and teams associated with that organization. The following is an example org_counts.json file: { "1": { "name": "Operations", "teams": 5, "users": 17 }, "2": { "name": "Development", "teams": 27, "users": 154 }, "3": { "name": "Networking", "teams": 3, "users": 28 } } 28.3.6. cred_type_counts.json The cred_type_counts.json file contains information on the different credential types in the cluster, and how many credentials exist for each type. The following is an example cred_type_counts.json file: { "1": { "credential_count": 15, "managed_by_controller": true, "name": "Machine" }, "2": { "credential_count": 2, "managed_by_controller": true, "name": "Source Control" }, "3": { "credential_count": 3, "managed_by_controller": true, "name": "Vault" }, "4": { "credential_count": 0, "managed_by_controller": true, "name": "Network" }, "5": { "credential_count": 6, "managed_by_controller": true, "name": "Amazon Web Services" }, "6": { "credential_count": 0, "managed_by_controller": true, "name": "OpenStack" }, 28.3.7. inventory_counts.json The inventory_counts.json file contains information on the different inventories in the cluster. The following is an example inventory_counts.json file: { "1": { "hosts": 211, "kind": "", "name": "AWS Inventory", "source_list": [ { "name": "AWS", "num_hosts": 211, "source": "ec2" } ], "sources": 1 }, "2": { "hosts": 15, "kind": "", "name": "Manual inventory", "source_list": [], "sources": 0 }, "3": { "hosts": 25, "kind": "", "name": "SCM inventory - test repo", "source_list": [ { "name": "Git source", "num_hosts": 25, "source": "scm" } ], "sources": 1 } "4": { "num_hosts": 5, "kind": "smart", "name": "Filtered AWS inventory", "source_list": [], "sources": 0 } } 28.3.8. projects_by_scm_type.json The projects_by_scm_type.json file provides a breakdown of all projects in the cluster, by source control type. The following is an example projects_by_scm_type.json file: { "git": 27, "hg": 0, "insights": 1, "manual": 0, "svn": 0 } 28.3.9. query_info.json The query_info.json file provides details on when and how the data collection happened. The following is an example query_info.json file: { "collection_type": "manual", "current_time": "2019-11-22 20:10:27.751267+00:00", "last_run": "2019-11-22 20:03:40.361225+00:00" } collection_type is one of manual or automatic . 28.3.10. job_counts.json The job_counts.json file provides details on the job history of the cluster, describing both how jobs were launched, and what their finishing status is. The following is an example job_counts.json file: "launch_type": { "dependency": 3628, "manual": 799, "relaunch": 6, "scheduled": 1286, "scm": 6, "workflow": 1348 }, "status": { "canceled": 7, "failed": 108, "successful": 6958 }, "total_jobs": 7073 } 28.3.11. 
job_instance_counts.json The job_instance_counts.json file provides the same detail as job_counts.json , broken down by instance. The following is an example job_instance_counts.json file: { "localhost": { "launch_type": { "dependency": 3628, "manual": 770, "relaunch": 3, "scheduled": 1009, "scm": 6, "workflow": 1336 }, "status": { "canceled": 2, "failed": 60, "successful": 6690 } } } Note that instances in this file are by hostname, not by UUID as they are in instance_info . 28.3.12. unified_job_template_table.csv The unified_job_template_table.csv file provides information on job templates in the system. Each line contains the following fields for the job template: id : Job template id. name : Job template name. polymorphic_ctype_id : The id of the type of template it is. model : The name of the polymorphic_ctype_id for the template. Examples include project , systemjobtemplate , jobtemplate , inventorysource , and workflowjobtemplate . created : When the template was created. modified : When the template was last updated. created_by_id : The userid that created the template. Blank if done by the system. modified_by_id : The userid that last modified the template. Blank if done by the system. current_job_id : Currently executing job id for the template, if any. last_job_id : Last execution of the job. last_job_run : Time of last execution of the job. last_job_failed : Whether the last_job_id failed. status : Status of last_job_id . next_job_run : scheduled execution of the template, if any. next_schedule_id : Schedule id for next_job_run , if any. 28.3.13. unified_jobs_table.csv The unified_jobs_table.csv file provides information on jobs run by the system. Each line contains the following fields for a job: id : Job id. name : Job name (from the template). polymorphic_ctype_id : The id of the type of job it is. model : The name of the polymorphic_ctype_id for the job. Examples include job and workflow . organization_id : The organization ID for the job. organization_name : Name for the organization_id . created : When the job record was created. started : When the job started executing. finished : When the job finished. elapsed : Elapsed time for the job in seconds. unified_job_template_id : The template for this job. launch_type : One of manual , scheduled , relaunched , scm , workflow , or dependency . schedule_id : The id of the schedule that launched the job, if any, instance_group_id : The instance group that executed the job. execution_node : The node that executed the job (hostname, not UUID). controller_node : The automation controller node for the job, if run as an isolated job, or in a container group. cancel_flag : Whether the job was canceled. status : Status of the job. failed : Whether the job failed. job_explanation : Any additional detail for jobs that failed to execute properly. forks : Number of forks executed for this job. 28.3.14. workflow_job_template_node_table.csv The workflow_job_template_node_table.csv file provides information on the nodes defined in workflow job templates on the system. Each line contains the following fields for a worfklow job template node: id : Node id. created : When the node was created. modified : When the node was last updated. unified_job_template_id : The id of the job template, project, inventory, or other parent resource for this node. workflow_job_template_id : The workflow job template that contains this node. inventory_id : The inventory used by this node. success_nodes : Nodes that are triggered after this node succeeds. 
failure_nodes : Nodes that are triggered after this node fails. always_nodes : Nodes that always are triggered after this node finishes. all_parents_must_converge : Whether this node requires all its parent conditions satisfied to start. 28.3.15. workflow_job_node_table.csv The workflow_job_node_table.csv provides information on the jobs that have been executed as part of a workflow on the system. Each line contains the following fields for a job run as part of a workflow: id : Node id. created : When the node was created. modified : When the node was last updated. job_id : The job id for the job run for this node. unified_job_template_id : The id of the job template, project, inventory, or other parent resource for this node. workflow_job_template_id : The workflow job template that contains this node. inventory_id : The inventory used by this node. success_nodes : Nodes that are triggered after this node succeeds. failure_nodes : Nodes that are triggered after this node fails. always_nodes : Nodes that always are triggered after this node finishes. do_not_run : Nodes that were not run in the workflow due to their start conditions not being triggered. all_parents_must_converge : Whether this node requires all its parent conditions satisfied to start. 28.3.16. events_table.csv The events_table.csv file provides information on all job events from all job runs in the system. Each line contains the following fields for a job event: id : Event id. uuid : Event UUID. created : When the event was created. parent_uuid : The parent UUID for this event, if any. event : The Ansible event type. task_action : The module associated with this event, if any (such as command or yum ). failed : Whether the event returned failed . changed : Whether the event returned changed . playbook : Playbook associated with the event. play : Play name from playbook. task : Task name from playbook. role : Role name from playbook. job_id : Id of the job this event is from. host_id : Id of the host this event is associated with, if any. host_name : Name of the host this event is associated with, if any. start : Start time of the task. end : End time of the task. duration : Duration of the task. warnings : Any warnings from the task or module. deprecations : Any deprecation warnings from the task or module. 28.4. Analytics Reports Reports from collection are accessible through the automation controller UI if you have superuser-level permissions. By including the analytics view on-prem where it is most convenient, you can access data that can affect your day-to-day work. This data is aggregated from the automation provided on console.redhat.com . Currently available is a view-only version of the Automation Calculator utility that shows a report that represents (possible) savings to the subscriber. Note This option is available for technical preview and is subject to change in a future release. To preview the analytic reports view, set the Enable Preview of New User Interface toggle to On from the Miscellaneous System Settings option of the Settings menu. After saving, logout and log back in to access the options under the Analytics section on the navigation panel. Host Metrics is another analytics report collected for host data. The ability to access this option from this part of the UI is currently in tech preview and is subject to change in a future release. For more information, see the Host Metrics view in Automation controller configuration . | [
"\"config.json\": \"1.1\", \"counts.json\": \"1.0\", \"cred_type_counts.json\": \"1.0\", \"events_table.csv\": \"1.1\", \"instance_info.json\": \"1.0\", \"inventory_counts.json\": \"1.2\", \"job_counts.json\": \"1.0\", \"job_instance_counts.json\": \"1.0\", \"org_counts.json\": \"1.0\", \"projects_by_scm_type.json\": \"1.0\", \"query_info.json\": \"1.0\", \"unified_job_template_table.csv\": \"1.0\", \"unified_jobs_table.csv\": \"1.0\", \"workflow_job_node_table.csv\": \"1.0\", \"workflow_job_template_node_table.csv\": \"1.0\" }",
"{ \"ansible_version\": \"2.9.1\", \"authentication_backends\": [ \"social_core.backends.azuread.AzureADOAuth2\", \"django.contrib.auth.backends.ModelBackend\" ], \"external_logger_enabled\": true, \"external_logger_type\": \"splunk\", \"free_instances\": 1234, \"install_uuid\": \"d3d497f7-9d07-43ab-b8de-9d5cc9752b7c\", \"instance_uuid\": \"bed08c6b-19cc-4a49-bc9e-82c33936e91b\", \"license_expiry\": 34937373, \"license_type\": \"enterprise\", \"logging_aggregators\": [ \"awx\", \"activity_stream\", \"job_events\", \"system_tracking\" ], \"pendo_tracking\": \"detailed\", \"platform\": { \"dist\": [ \"redhat\", \"7.4\", \"Maipo\" ], \"release\": \"3.10.0-693.el7.x86_64\", \"system\": \"Linux\", \"type\": \"traditional\" }, \"total_licensed_instances\": 2500, \"controller_url_base\": \"https://ansible.rhdemo.io\", \"controller_version\": \"3.6.3\" }",
"{ \"bed08c6b-19cc-4a49-bc9e-82c33936e91b\": { \"capacity\": 57, \"cpu\": 2, \"enabled\": true, \"last_isolated_check\": \"2019-08-15T14:48:58.553005+00:00\", \"managed_by_policy\": true, \"memory\": 8201400320, \"uuid\": \"bed08c6b-19cc-4a49-bc9e-82c33936e91b\", \"version\": \"3.6.3\" } \"c0a2a215-0e33-419a-92f5-e3a0f59bfaee\": { \"capacity\": 57, \"cpu\": 2, \"enabled\": true, \"last_isolated_check\": \"2019-08-15T14:48:58.553005+00:00\", \"managed_by_policy\": true, \"memory\": 8201400320, \"uuid\": \"c0a2a215-0e33-419a-92f5-e3a0f59bfaee\", \"version\": \"3.6.3\" } }",
"{ \"active_anonymous_sessions\": 1, \"active_host_count\": 682, \"active_sessions\": 2, \"active_user_sessions\": 1, \"credential\": 38, \"custom_inventory_script\": 2, \"custom_virtualenvs\": 4, \"host\": 697, \"inventories\": { \"normal\": 20, \"smart\": 1 }, \"inventory\": 21, \"job_template\": 78, \"notification_template\": 5, \"organization\": 10, \"pending_jobs\": 0, \"project\": 20, \"running_jobs\": 0, \"schedule\": 16, \"team\": 5, \"unified_job\": 7073, \"user\": 28, \"workflow_job_template\": 15 }",
"{ \"1\": { \"name\": \"Operations\", \"teams\": 5, \"users\": 17 }, \"2\": { \"name\": \"Development\", \"teams\": 27, \"users\": 154 }, \"3\": { \"name\": \"Networking\", \"teams\": 3, \"users\": 28 } }",
"{ \"1\": { \"credential_count\": 15, \"managed_by_controller\": true, \"name\": \"Machine\" }, \"2\": { \"credential_count\": 2, \"managed_by_controller\": true, \"name\": \"Source Control\" }, \"3\": { \"credential_count\": 3, \"managed_by_controller\": true, \"name\": \"Vault\" }, \"4\": { \"credential_count\": 0, \"managed_by_controller\": true, \"name\": \"Network\" }, \"5\": { \"credential_count\": 6, \"managed_by_controller\": true, \"name\": \"Amazon Web Services\" }, \"6\": { \"credential_count\": 0, \"managed_by_controller\": true, \"name\": \"OpenStack\" },",
"{ \"1\": { \"hosts\": 211, \"kind\": \"\", \"name\": \"AWS Inventory\", \"source_list\": [ { \"name\": \"AWS\", \"num_hosts\": 211, \"source\": \"ec2\" } ], \"sources\": 1 }, \"2\": { \"hosts\": 15, \"kind\": \"\", \"name\": \"Manual inventory\", \"source_list\": [], \"sources\": 0 }, \"3\": { \"hosts\": 25, \"kind\": \"\", \"name\": \"SCM inventory - test repo\", \"source_list\": [ { \"name\": \"Git source\", \"num_hosts\": 25, \"source\": \"scm\" } ], \"sources\": 1 } \"4\": { \"num_hosts\": 5, \"kind\": \"smart\", \"name\": \"Filtered AWS inventory\", \"source_list\": [], \"sources\": 0 } }",
"{ \"git\": 27, \"hg\": 0, \"insights\": 1, \"manual\": 0, \"svn\": 0 }",
"{ \"collection_type\": \"manual\", \"current_time\": \"2019-11-22 20:10:27.751267+00:00\", \"last_run\": \"2019-11-22 20:03:40.361225+00:00\" }",
"\"launch_type\": { \"dependency\": 3628, \"manual\": 799, \"relaunch\": 6, \"scheduled\": 1286, \"scm\": 6, \"workflow\": 1348 }, \"status\": { \"canceled\": 7, \"failed\": 108, \"successful\": 6958 }, \"total_jobs\": 7073 }",
"{ \"localhost\": { \"launch_type\": { \"dependency\": 3628, \"manual\": 770, \"relaunch\": 3, \"scheduled\": 1009, \"scm\": 6, \"workflow\": 1336 }, \"status\": { \"canceled\": 2, \"failed\": 60, \"successful\": 6690 } } }"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-usability-analytics-data-collection |
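As a sketch of the API-based opt-in and the local inspection step described above, assuming a controller reachable at <controller_host> and an admin credential; the exact path and name of the tarball written by awx-manage gather_analytics can vary, so the file name below is a placeholder:
curl -k -u admin:<password> -X PATCH -H "Content-Type: application/json" \
  -d '{"INSIGHTS_TRACKING_STATE": true}' https://<controller_host>/api/v2/settings/system/
awx-manage gather_analytics          # without --ship, this only writes the analytics tarball locally
tar -tzf <analytics_bundle>.tar.gz   # lists manifest.json, config.json, and the other JSON/CSV files described above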
Chapter 31. Security | Chapter 31. Security policycoreutils component, BZ# 1082676 Due to a bug in the fixfiles scripts, if the exclude_dirs file is defined to exclude directories from relabeling, running the fixfiles restore command applies incorrect labels on numerous files on the system. SELinux component Note that a number of daemons that were previously not confined are confined in Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/known-issues-security |
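A short sketch for checking whether the affected configuration exists on a host and for confirming that a daemon runs in a confined SELinux domain; the exclude file path is the conventional location and might not exist if no exclusions were defined:
cat /etc/selinux/fixfiles_exclude_dirs 2>/dev/null   # directories excluded from relabeling, if any
ps -eZ | grep sshd                                   # the first column shows the SELinux domain the daemon is confined to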
7.5. Configuring System Services for SSSD | 7.5. Configuring System Services for SSSD SSSD provides interfaces towards several system services. Most notably: Name Service Switch (NSS) See Section 7.5.1, "Configuring Services: NSS" . Pluggable Authentication Modules (PAM) See Section 7.5.2, "Configuring Services: PAM" . OpenSSH See Configuring SSSD to Provide a Cache for the OpenSSH Services in the Linux Domain Identity, Authentication, and Policy Guide . autofs See Section 7.5.3, "Configuring Services: autofs " . sudo See Section 7.5.4, "Configuring Services: sudo " . 7.5.1. Configuring Services: NSS How SSSD Works with NSS The Name Service Switch (NSS) service maps system identities and services with configuration sources: it provides a central configuration store where services can look up sources for various configuration and name resolution mechanisms. SSSD can use NSS as a provider for several types of NSS maps. Most notably: User information (the passwd map) Groups (the groups map) Netgroups (the netgroups map) Services (the services map) Prerequisites Install SSSD. Configure NSS Services to Use SSSD Use the authconfig utility to enable SSSD: This updates the /etc/nsswitch.conf file to enable the following NSS maps to use SSSD: Open /etc/nsswitch.conf and add sss to the services map line: Configure SSSD to work with NSS Open the /etc/sssd/sssd.conf file. In the [sssd] section, make sure that NSS is listed as one of the services that works with SSSD. In the [nss] section, configure how SSSD interacts with NSS. For example: For a complete list of available options, see NSS configuration options in the sssd.conf (5) man page. Restart SSSD. Test That the Integration Works Correctly Display information about a user with these commands: id user getent passwd user 7.5.2. Configuring Services: PAM Warning A mistake in the PAM configuration file can lock users out of the system completely. Always back up the configuration files before performing any changes, and keep a session open so that you can revert any changes. Configure PAM to Use SSSD Use the authconfig utility to enable SSSD: This updates the PAM configuration to reference the SSSD modules, usually in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files. For example: For details, see the pam.conf (5) or pam (8) man pages. Configure SSSD to work with PAM Open the /etc/sssd/sssd.conf file. In the [sssd] section, make sure that PAM is listed as one of the services that works with SSSD. In the [pam] section, configure how SSSD interacts with PAM. For example: For a complete list of available options, see PAM configuration options in the sssd.conf (5) man page. Restart SSSD. Test That the Integration Works Correctly Try logging in as a user. Use the sssctl user-checks user_name auth command to check your SSSD configuration. For details, use the sssctl user-checks --help command. 7.5.3. Configuring Services: autofs How SSSD Works with automount The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), which saves system resources. For details on automount , see autofs in the Storage Administration Guide . You can configure automount to point to SSSD. In this setup: When a user attempts to mount a directory, SSSD contacts LDAP to obtain the required information about the current automount configuration. SSSD stores the information required by automount in a cache, so that users can mount directories even when the LDAP server is offline. Configure autofs to Use SSSD Install the autofs package. 
Open the /etc/nsswitch.conf file. On the automount line, change the location where to look for the automount map information from ldap to sss : Configure SSSD to work with autofs Open the /etc/sssd/sssd.conf file. In the [sssd] section, add autofs to the list of services that SSSD manages. Create a new [autofs] section. You can leave it empty. For a list of available options, see AUTOFS configuration options in the sssd.conf (5) man page. Make sure an LDAP domain is available in sssd.conf , so that SSSD can read the automount information from LDAP. See Section 7.3.2, "Configuring an LDAP Domain for SSSD" . The [domain] section of sssd.conf accepts several autofs -related options. For example: For a complete list of available options, see DOMAIN SECTIONS in the sssd.conf (5) man page. If you do not provide additional autofs options, the configuration depends on the identity provider settings. Restart SSSD. Test the Configuration Use the automount -m command to print the maps from SSSD. 7.5.4. Configuring Services: sudo How SSSD Works with sudo The sudo utility gives administrative access to specified users. For more information about sudo , see The sudo utility documentation in the System Administrator's Guide . You can configure sudo to point to SSSD. In this setup: When a user attempts a sudo operation, SSSD contacts LDAP or AD to obtain the required information about the current sudo configuration. SSSD stores the sudo information in a cache, so that users can perform sudo operations even when the LDAP or AD server is offline. SSSD only caches sudo rules which apply to the local system, depending on the value of the sudoHost attribute. See the sssd-sudo (5) man page for details. Configure sudo to Use SSSD Open the /etc/nsswitch.conf file. Add SSSD to the list on the sudoers line. Configure SSSD to work with sudo Open the /etc/sssd/sssd.conf file. In the [sssd] section, add sudo to the list of services that SSSD manages. Create a new [sudo] section. You can leave it empty. For a list of available options, see SUDO configuration options in the sssd.conf (5) man page. Make sure an LDAP or AD domain is available in sssd.conf , so that SSSD can read the sudo information from the directory. For details, see: Section 7.3.2, "Configuring an LDAP Domain for SSSD" the Using Active Directory as an Identity Provider for SSSD section in the Windows Integration Guide . The [domain] section for the LDAP or AD domain must include these sudo -related parameters: Note Setting Identity Management or AD as the ID provider automatically enables the sudo provider. In this situation, it is not necessary to specify the sudo_provider parameter. For a complete list of available options, see DOMAIN SECTIONS in the sssd.conf (5) man page. For options available for a sudo provider, see the sssd-ldap (5) man page. Restart SSSD. If you use AD as the provider, you must extend the AD schema to support sudo rules. For details, see the sudo documentation. For details about providing sudo rules in LDAP or AD, see the sudoers.ldap (5) man page. | [
"yum install sssd",
"authconfig --enablesssd --update",
"passwd: files sss shadow: files sss group: files sss netgroup: files sss",
"services: files sss",
"[sssd] [... file truncated ...] services = nss , pam",
"[nss] filter_groups = root filter_users = root entry_cache_timeout = 300 entry_cache_nowait_percentage = 75",
"systemctl restart sssd.service",
"authconfig --enablesssdauth --update",
"[... file truncated ...] auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_sss.so use_first_pass auth required pam_deny.so [... file truncated ...]",
"[sssd] [... file truncated ...] services = nss, pam",
"[pam] offline_credentials_expiration = 2 offline_failed_login_attempts = 3 offline_failed_login_delay = 5",
"systemctl restart sssd.service",
"yum install autofs",
"automount: files sss",
"[sssd] services = nss,pam, autofs",
"[autofs]",
"[domain/LDAP] [... file truncated ...] autofs_provider=ldap ldap_autofs_search_base=cn=automount,dc=example,dc=com ldap_autofs_map_object_class=automountMap ldap_autofs_entry_object_class=automount ldap_autofs_map_name=automountMapName ldap_autofs_entry_key=automountKey ldap_autofs_entry_value=automountInformation",
"systemctl restart sssd.service",
"sudoers: files sss",
"[sssd] services = nss,pam, sudo",
"[sudo]",
"[domain/ LDAP_or_AD_domain ] sudo_provider = ldap ldap_sudo_search_base = ou=sudoers,dc= example ,dc= com",
"systemctl restart sssd.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/configuring_services |
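The verification commands mentioned in the sections above can be combined into a quick smoke test after the configuration changes; user1 is a placeholder for any account known to the configured domain:
systemctl restart sssd.service
getent passwd user1             # NSS lookup resolved through the sss map
id user1
sssctl user-checks user1 auth   # checks the PAM integration for that user
automount -m                    # prints the automount maps that SSSD serves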
Chapter 7. Forwarding telemetry data | Chapter 7. Forwarding telemetry data You can use the OpenTelemetry Collector to forward your telemetry data. 7.1. Forwarding traces to a TempoStack instance To configure forwarding traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources . Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespaces resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR). Example OpenTelemetryCollector apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-simplest-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created. 2 The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol. Tip You can deploy telemetrygen as a test: apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4 Additional resources OpenTelemetry Collector documentation Deployment examples on GitHub 7.2. Forwarding logs to a LokiStack instance You can deploy the OpenTelemetry Collector to forward logs to a LokiStack instance. 
Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Loki Operator is installed. A supported LokiStack instance is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging Create a cluster role that grants the Collector's service account the permissions to push logs to the LokiStack application tenant. Example ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: ["loki.grafana.com"] resourceNames: ["logs"] resources: ["application"] verbs: ["create"] - apiGroups: [""] resources: ["pods", "namespaces", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets"] verbs: ["get", "list", "watch"] - apiGroups: ["extensions"] resources: ["replicasets"] verbs: ["get", "list", "watch"] Bind the cluster role to the service account. Example ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging Create an OpenTelemetryCollector custom resource (CR) object. Example OpenTelemetryCollector CR object apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes["level"], ConvertCase(severity_text, "lower")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug] 1 Provides the following resource attributes to be used by the web console: kubernetes.namespace_name , kubernetes.pod_name , kubernetes.container_name , and log_type . 2 Enables the BearerTokenAuth Extension that is required by the OTLP HTTP Exporter. 3 Enables the OTLP HTTP Exporter to export logs from the Collector. 
Tip You can deploy telemetrygen as a test: apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name="telemetrygen" restartPolicy: Never backoffLimit: 4 | [
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/red_hat_build_of_opentelemetry/otel-forwarding-telemetry-data |
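A hedged sketch for rolling out the resources above and confirming that the Collector came up; the YAML file names are arbitrary, <namespace> is a placeholder, and the generated workload name (the CR name plus a -collector suffix) can differ between Operator versions:
oc apply -f otel-serviceaccount.yaml -f otel-clusterrole.yaml -f otel-clusterrolebinding.yaml -f otel-collector-cr.yaml
oc get pods -n <namespace> | grep otel
oc logs deployment/otel-collector -n <namespace> --tail=20   # look for the receivers and exporters starting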
Disconnected installation mirroring | Disconnected installation mirroring OpenShift Container Platform 4.13 Mirroring the installation container images Red Hat OpenShift Documentation Team | [
"./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1",
"sudo ./mirror-registry upgrade -v",
"sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v",
"sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v",
"./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1",
"./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key",
"./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage",
"./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>",
"export QUAY=/USDHOME/quay-install",
"cp ~/ssl.crt USDQUAY/quay-config",
"cp ~/ssl.key USDQUAY/quay-config",
"systemctl --user restart quay-app",
"./mirror-registry uninstall -v --quayRoot <example_directory_name>",
"sudo systemctl status <service>",
"systemctl --user status <service>",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"podman login registry.redhat.io",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"podman login <mirror_registry>",
"oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2",
"oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2",
"podman login <mirror_registry>",
"oc adm catalog mirror file://local/index/<repository>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>[/<repository>] --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]",
"manifests-<index_image_name>-<random_number>",
"manifests-index/<repository>/<index_image_name>-<random_number>",
"tar xvzf oc-mirror.tar.gz",
"chmod +x oc-mirror",
"sudo mv oc-mirror /usr/local/bin/.",
"oc mirror help",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"mkdir -p <directory_name>",
"cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"oc mirror init --registry example.com/mirror/oc-mirror-metadata > imageset-config.yaml 1",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.13 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}",
"oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2",
"oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2",
"cd <path_to_output_directory>",
"ls",
"mirror_seq1_000000.tar",
"oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2",
"oc apply -f ./oc-mirror-workspace/results-1639608409/",
"oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource -n openshift-marketplace",
"oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3",
"Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt",
"cd oc-mirror-workspace/",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: local: path: /home/user/metadata 1 mirror: platform: channels: - name: stable-4.13 2 type: ocp graph: false operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog 3 targetCatalog: my-namespace/redhat-operator-index 4 packages: - name: aws-load-balancer-operator - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 5 packages: - name: rhacs-operator additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 6",
"oc mirror --config=./imageset-config.yaml \\ 1 --include-local-oci-catalogs 2 docker://registry.example:5000 3",
"[[registry]] location = \"registry.redhat.io:5000\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"preprod-registry.example.com\" insecure = false",
"additionalImages: - name: registry.redhat.io/ubi8/ubi:latest",
"local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz",
"repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0",
"operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: elasticsearch-operator minVersion: '2.4.0'",
"operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'",
"architectures: - amd64 - arm64",
"channels: - name: stable-4.10 - name: stable-4.13",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: channels: - name: stable-4.10 minVersion: 4.10.10",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.11 type: ocp graph: true operators: - catalog: registry.redhat.io/redhat/certified-operator-index:v4.13 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 full: true",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 targetCatalog: my-namespace/my-operator-catalog",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.13 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: graph: true # Required for the OSUS Operator architectures: - amd64 channels: - name: stable-4.12 minVersion: '4.12.28' maxVersion: '4.12.28' shortestPath: true type: ocp - name: eus-4.14 minVersion: '4.12.28' maxVersion: '4.14.16' shortestPath: true type: ocp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/disconnected_installation_mirroring/index |
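The mirroring commands above stop at extracting the openshift-install binary. A short sketch of how the mirror is typically consumed afterwards follows, reusing the same environment variables; the registry host and repository shown are placeholder values, and the imageContentSources stanza only illustrates the kind of snippet that oc adm release mirror prints for install-config.yaml.

# Confirm that the mirrored release payload is readable from the mirror registry.
oc adm release info -a ${LOCAL_SECRET_JSON} "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

# Example install-config.yaml stanza pointing the installer at the mirror (placeholder values):
imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev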
Chapter 1. Planning an upgrade | Chapter 1. Planning an upgrade An in-place upgrade is the recommended way to upgrade your system to a later major version of RHEL. To ensure that you are aware of all major changes between RHEL 6 and RHEL 7, consult the Migration Planning Guide before beginning the in-place upgrade process. You can also verify whether your system can be upgraded by running the Preupgrade Assistant. The Preupgrade Assistant assesses your system for potential problems that could interfere with or inhibit the upgrade before any changes are made to your system. See also Known Issues. Note: After you perform an in-place upgrade, it is possible to get the working system back in limited configurations by using the integrated rollback capability of the Red Hat Upgrade Tool or by using a suitable custom backup and recovery solution, for example the Relax-and-Recover (ReaR) utility. For more information, see Rolling back the upgrade. This RHEL 6 to RHEL 7 upgrade procedure is available if your RHEL system meets the following criteria: Red Hat Enterprise Linux 6.10: Your system must have the latest RHEL 6.10 packages installed. Note that for RHEL 6.10, only Extended Life Phase (ELP) support is available. Architecture and variant: Only the following combinations of variant and architecture can be upgraded (availability listed as Intel 64-bit / IBM POWER, big endian / IBM Z 64-bit / Intel 32-bit): Server Edition: Available / Available / Available / Not available. HPC Compute Node: Available / N/A / N/A / Not available. Desktop Edition: Not available / N/A / N/A / Not available. Workstation Edition: Not available / N/A / N/A / Not available. Server running CloudForms software: Not available / N/A / N/A / N/A. Server running Satellite software: Not available (to upgrade Satellite environments from RHEL 6 to RHEL 7, see the Red Hat Satellite Installation Guide) / N/A / N/A / N/A. Note: Upgrades of 64-bit IBM Z systems are allowed unless a Direct Access Storage Device (DASD) with the Linux Disk Layout (LDL) is used. Supported packages: The in-place upgrade is available for the following packages: packages installed from the base repository, for example the rhel-6-server-rpms repository if the system is RHEL 6 Server on the Intel architecture, plus the Preupgrade Assistant, the Red Hat Upgrade Tool, and any other packages that are required for the upgrade. Note: It is recommended to perform the upgrade with a minimum number of packages installed. File systems: File system formats are kept intact. As a result, file systems have the same limitations as when they were originally created. Desktop: System upgrades with GNOME and KDE installed are not allowed. For more information, see Upgrading from RHEL 6 to RHEL 7 on Gnome Desktop Environment failed. Virtualization: Upgrades with KVM or VMware virtualization are available. Upgrades of RHEL on Microsoft Hyper-V are not allowed. High Availability: Upgrades of systems using the High Availability add-on are not allowed. Public Clouds: The in-place upgrade is not allowed for on-demand instances on public clouds. Third-party packages: The in-place upgrade is not allowed on systems using third-party packages, especially packages with third-party drivers that are needed for booting. The /usr directory: The in-place upgrade is not allowed on systems where the /usr directory is on a separate partition. For more information, see Why does Red Hat Enterprise Linux 6 to 7 in-place upgrade fail if /usr is on a separate partition?
| null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/upgrading_from_rhel_6_to_rhel_7/planning-an-upgrade-from-rhel-6-to-rhel-7upgrading-from-rhel-6-to-rhel-7 |
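The planning chapter above names the Preupgrade Assistant and the Red Hat Upgrade Tool without showing how they are invoked. A minimal sketch on a registered RHEL 6.10 system follows; it assumes the Preupgrade Assistant packages are already installed from the appropriate RHEL 6 channel, and the repository URL is a placeholder for a RHEL 7 installation repository.

# Run the assessment; the report is written under /root/preupgrade/ (result.html).
preupg

# After resolving the reported blockers, start the in-place upgrade.
redhat-upgrade-tool --network 7.0 --instrepo <rhel_7_repository_url>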